00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 868
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3528
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.060 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.060 The recommended git tool is: git
00:00:00.061 using credential 00000000-0000-0000-0000-000000000002
00:00:00.062 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.093 Fetching changes from the remote Git repository
00:00:00.095 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.155 Using shallow fetch with depth 1
00:00:00.155 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.155 > git --version # timeout=10
00:00:00.214 > git --version # 'git version 2.39.2'
00:00:00.214 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.253 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.253 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.228 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.242 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.255 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD)
00:00:04.255 > git config core.sparsecheckout # timeout=10
00:00:04.268 > git read-tree -mu HEAD # timeout=10
00:00:04.285 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5
00:00:04.303 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images"
00:00:04.303 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10
00:00:04.382 [Pipeline] Start of Pipeline
00:00:04.391 [Pipeline] library
00:00:04.393 Loading library shm_lib@master
00:00:04.393 Library shm_lib@master is cached. Copying from home.
00:00:04.409 [Pipeline] node
00:00:04.433 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.435 [Pipeline] {
00:00:04.445 [Pipeline] catchError
00:00:04.446 [Pipeline] {
00:00:04.456 [Pipeline] wrap
00:00:04.464 [Pipeline] {
00:00:04.473 [Pipeline] stage
00:00:04.475 [Pipeline] { (Prologue)
00:00:04.675 [Pipeline] sh
00:00:05.586 + logger -p user.info -t JENKINS-CI
00:00:05.620 [Pipeline] echo
00:00:05.622 Node: GP11
00:00:05.630 [Pipeline] sh
00:00:05.983 [Pipeline] setCustomBuildProperty
00:00:05.996 [Pipeline] echo
00:00:05.997 Cleanup processes
00:00:06.003 [Pipeline] sh
00:00:06.297 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.297 4751 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.313 [Pipeline] sh
00:00:06.612 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.612 ++ grep -v 'sudo pgrep'
00:00:06.612 ++ awk '{print $1}'
00:00:06.612 + sudo kill -9
00:00:06.612 + true
00:00:06.631 [Pipeline] cleanWs
00:00:06.644 [WS-CLEANUP] Deleting project workspace...
00:00:06.644 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.662 [WS-CLEANUP] done
00:00:06.666 [Pipeline] setCustomBuildProperty
00:00:06.685 [Pipeline] sh
00:00:06.981 + sudo git config --global --replace-all safe.directory '*'
00:00:07.068 [Pipeline] httpRequest
00:00:09.477 [Pipeline] echo
00:00:09.479 Sorcerer 10.211.164.101 is alive
00:00:09.490 [Pipeline] retry
00:00:09.492 [Pipeline] {
00:00:09.508 [Pipeline] httpRequest
00:00:09.514 HttpMethod: GET
00:00:09.515 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:00:09.516 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:00:09.531 Response Code: HTTP/1.1 200 OK
00:00:09.531 Success: Status code 200 is in the accepted range: 200,404
00:00:09.532 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:00:13.195 [Pipeline] }
00:00:13.215 [Pipeline] // retry
00:00:13.225 [Pipeline] sh
00:00:13.525 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:00:13.546 [Pipeline] httpRequest
00:00:13.954 [Pipeline] echo
00:00:13.956 Sorcerer 10.211.164.101 is alive
00:00:13.968 [Pipeline] retry
00:00:13.970 [Pipeline] {
00:00:13.985 [Pipeline] httpRequest
00:00:13.991 HttpMethod: GET
00:00:13.992 URL: http://10.211.164.101/packages/spdk_bbce7a87401bc737804431cd08d24fede99b1400.tar.gz
00:00:13.993 Sending request to url: http://10.211.164.101/packages/spdk_bbce7a87401bc737804431cd08d24fede99b1400.tar.gz
00:00:14.003 Response Code: HTTP/1.1 200 OK
00:00:14.004 Success: Status code 200 is in the accepted range: 200,404
00:00:14.004 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_bbce7a87401bc737804431cd08d24fede99b1400.tar.gz
00:01:24.960 [Pipeline] }
00:01:24.977 [Pipeline] // retry
00:01:24.984 [Pipeline] sh
00:01:25.285 + tar --no-same-owner -xf spdk_bbce7a87401bc737804431cd08d24fede99b1400.tar.gz
00:01:27.840 [Pipeline] sh
00:01:28.134 + git -C spdk log --oneline -n5
00:01:28.134 bbce7a874 event: move struct spdk_lw_thread to internal header
00:01:28.134 5031f0f3b module/raid: Assign bdev_io buffers to raid_io
00:01:28.134 dc3ea9d27 bdevperf: Allocate an md buffer for verify op
00:01:28.134 0ce363beb spdk_log: introduce spdk_log_ext API
00:01:28.134 412fced1b bdev/compress: unmap support.
00:01:28.153 [Pipeline] withCredentials
00:01:28.166 > git --version # timeout=10
00:01:28.177 > git --version # 'git version 2.39.2'
00:01:28.207 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:28.209 [Pipeline] {
00:01:28.218 [Pipeline] retry
00:01:28.219 [Pipeline] {
00:01:28.233 [Pipeline] sh
00:01:28.820 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:01:31.391 [Pipeline] }
00:01:31.409 [Pipeline] // retry
00:01:31.413 [Pipeline] }
00:01:31.429 [Pipeline] // withCredentials
00:01:31.438 [Pipeline] httpRequest
00:01:31.847 [Pipeline] echo
00:01:31.849 Sorcerer 10.211.164.101 is alive
00:01:31.859 [Pipeline] retry
00:01:31.861 [Pipeline] {
00:01:31.874 [Pipeline] httpRequest
00:01:31.880 HttpMethod: GET
00:01:31.880 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:31.882 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:31.888 Response Code: HTTP/1.1 200 OK
00:01:31.889 Success: Status code 200 is in the accepted range: 200,404
00:01:31.889 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:57.855 [Pipeline] }
00:01:57.870 [Pipeline] // retry
00:01:57.878 [Pipeline] sh
00:01:58.170 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:02:00.095 [Pipeline] sh
00:02:00.386 + git -C dpdk log --oneline -n5
00:02:00.387 eeb0605f11 version: 23.11.0
00:02:00.387 238778122a doc: update release notes for 23.11
00:02:00.387 46aa6b3cfc doc: fix description of RSS features
00:02:00.387 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:02:00.387 7e421ae345 devtools: support skipping forbid rule check
00:02:00.398 [Pipeline] }
00:02:00.412 [Pipeline] // stage
00:02:00.421 [Pipeline] stage
00:02:00.423 [Pipeline] { (Prepare)
00:02:00.442 [Pipeline] writeFile
00:02:00.457 [Pipeline] sh
00:02:00.749 + logger -p user.info -t JENKINS-CI
00:02:00.763 [Pipeline] sh
00:02:01.053 + logger -p user.info -t JENKINS-CI
00:02:01.066 [Pipeline] sh
00:02:01.357 + cat autorun-spdk.conf
00:02:01.357 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:01.357 SPDK_TEST_NVMF=1
00:02:01.357 SPDK_TEST_NVME_CLI=1
00:02:01.357 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:01.357 SPDK_TEST_NVMF_NICS=e810
00:02:01.357 SPDK_TEST_VFIOUSER=1
00:02:01.357 SPDK_RUN_UBSAN=1
00:02:01.357 NET_TYPE=phy
00:02:01.357 SPDK_TEST_NATIVE_DPDK=v23.11
00:02:01.357 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:01.366 RUN_NIGHTLY=1
00:02:01.370 [Pipeline] readFile
00:02:01.408 [Pipeline] withEnv
00:02:01.410 [Pipeline] {
00:02:01.423 [Pipeline] sh
00:02:01.714 + set -ex
00:02:01.714 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:01.714 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:01.714 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:01.714 ++ SPDK_TEST_NVMF=1
00:02:01.714 ++ SPDK_TEST_NVME_CLI=1
00:02:01.714 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:01.714 ++ SPDK_TEST_NVMF_NICS=e810
00:02:01.714 ++ SPDK_TEST_VFIOUSER=1
00:02:01.714 ++ SPDK_RUN_UBSAN=1
00:02:01.714 ++ NET_TYPE=phy
00:02:01.714 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:02:01.714 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:01.714 ++ RUN_NIGHTLY=1
00:02:01.714 + case $SPDK_TEST_NVMF_NICS in
00:02:01.714 + DRIVERS=ice
00:02:01.714 + [[ tcp == \r\d\m\a ]]
00:02:01.714 + [[ -n ice ]]
00:02:01.714 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:01.714 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:05.020 rmmod: ERROR: Module irdma is not currently loaded
00:02:05.020 rmmod: ERROR: Module i40iw is not currently loaded
00:02:05.020 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:05.020 + true
00:02:05.020 + for D in $DRIVERS
00:02:05.020 + sudo modprobe ice
00:02:05.020 + exit 0
00:02:05.031 [Pipeline] }
00:02:05.046 [Pipeline] // withEnv
00:02:05.051 [Pipeline] }
00:02:05.064 [Pipeline] // stage
00:02:05.073 [Pipeline] catchError
00:02:05.075 [Pipeline] {
00:02:05.089 [Pipeline] timeout
00:02:05.089 Timeout set to expire in 1 hr 0 min
00:02:05.091 [Pipeline] {
00:02:05.105 [Pipeline] stage
00:02:05.107 [Pipeline] { (Tests)
00:02:05.121 [Pipeline] sh
00:02:05.414 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:05.415 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:05.415 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:05.415 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:05.415 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:05.415 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:05.415 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:05.415 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:05.415 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:05.415 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:05.415 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:05.415 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:05.415 + source /etc/os-release
00:02:05.415 ++ NAME='Fedora Linux'
00:02:05.415 ++ VERSION='39 (Cloud Edition)'
00:02:05.415 ++ ID=fedora
00:02:05.415 ++ VERSION_ID=39
00:02:05.415 ++ VERSION_CODENAME=
00:02:05.415 ++ PLATFORM_ID=platform:f39
00:02:05.415 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:05.415 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:05.415 ++ LOGO=fedora-logo-icon
00:02:05.415 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:05.415 ++ HOME_URL=https://fedoraproject.org/
00:02:05.415 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:05.415 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:05.415 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:05.415 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:05.415 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:05.415 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:05.415 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:05.415 ++ SUPPORT_END=2024-11-12
00:02:05.415 ++ VARIANT='Cloud Edition'
00:02:05.415 ++ VARIANT_ID=cloud
00:02:05.415 + uname -a
00:02:05.415 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:05.415 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:06.358 Hugepages
00:02:06.358 node hugesize free / total
00:02:06.358 node0 1048576kB 0 / 0
00:02:06.358 node0 2048kB 0 / 0
00:02:06.358 node1 1048576kB 0 / 0
00:02:06.358 node1 2048kB 0 / 0
00:02:06.358
00:02:06.358 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:06.358 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:02:06.358 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:02:06.358 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:02:06.359 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:02:06.359 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:02:06.359 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:02:06.359 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:02:06.359 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:02:06.359 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:02:06.359 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:02:06.359 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:02:06.359 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:02:06.359 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:02:06.359 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:02:06.359 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:02:06.359 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:02:06.359 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:02:06.359 + rm -f /tmp/spdk-ld-path
00:02:06.359 + source autorun-spdk.conf
00:02:06.359 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:06.359 ++ SPDK_TEST_NVMF=1
00:02:06.359 ++ SPDK_TEST_NVME_CLI=1
00:02:06.359 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:06.359 ++ SPDK_TEST_NVMF_NICS=e810
00:02:06.359 ++ SPDK_TEST_VFIOUSER=1
00:02:06.359 ++ SPDK_RUN_UBSAN=1
00:02:06.359 ++ NET_TYPE=phy
00:02:06.359 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:02:06.359 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:06.359 ++ RUN_NIGHTLY=1
00:02:06.359 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:06.359 + [[ -n '' ]]
00:02:06.359 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:06.359 + for M in /var/spdk/build-*-manifest.txt
00:02:06.359 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:06.359 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:06.359 + for M in /var/spdk/build-*-manifest.txt
00:02:06.359 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:06.359 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:06.359 + for M in /var/spdk/build-*-manifest.txt
00:02:06.359 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:06.359 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:06.359 ++ uname
00:02:06.359 + [[ Linux == \L\i\n\u\x ]]
00:02:06.359 + sudo dmesg -T
00:02:06.359 + sudo dmesg --clear
00:02:06.619 + dmesg_pid=6093
00:02:06.619 + [[ Fedora Linux == FreeBSD ]]
00:02:06.619 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:06.619 + sudo dmesg -Tw
00:02:06.619 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:06.619 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:06.619 + [[ -x /usr/src/fio-static/fio ]]
00:02:06.619 + export FIO_BIN=/usr/src/fio-static/fio
00:02:06.619 + FIO_BIN=/usr/src/fio-static/fio
00:02:06.619 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:06.619 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:06.619 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:06.619 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:06.620 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:06.620 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:06.620 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:06.620 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:06.620 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:06.620 Test configuration:
00:02:06.620 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:06.620 SPDK_TEST_NVMF=1
00:02:06.620 SPDK_TEST_NVME_CLI=1
00:02:06.620 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:06.620 SPDK_TEST_NVMF_NICS=e810
00:02:06.620 SPDK_TEST_VFIOUSER=1
00:02:06.620 SPDK_RUN_UBSAN=1
00:02:06.620 NET_TYPE=phy
00:02:06.620 SPDK_TEST_NATIVE_DPDK=v23.11
00:02:06.620 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:06.620 RUN_NIGHTLY=1
00:02:06.620 22:25:09 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:02:06.620 22:25:09 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:06.620 22:25:09 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:06.620 22:25:09 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:06.620 22:25:09 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:06.620 22:25:09 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:06.620 22:25:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:06.620 22:25:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:06.620 22:25:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:06.620 22:25:09 -- paths/export.sh@5 -- $ export PATH
00:02:06.620 22:25:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:06.620 22:25:09 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:06.620 22:25:09 -- common/autobuild_common.sh@486 -- $ date +%s
00:02:06.620 22:25:09 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728678309.XXXXXX
00:02:06.620 22:25:09 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728678309.WOFnvt
00:02:06.620 22:25:09 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:02:06.620 22:25:09 -- common/autobuild_common.sh@492 -- $ '[' -n v23.11 ']'
00:02:06.620 22:25:09 -- common/autobuild_common.sh@493 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:06.620 22:25:09 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:02:06.620 22:25:09 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:02:06.620 22:25:09 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:02:06.620 22:25:09 -- common/autobuild_common.sh@502 -- $ get_config_params
00:02:06.620 22:25:09 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:02:06.620 22:25:09 -- common/autotest_common.sh@10 -- $ set +x
00:02:06.620 22:25:09 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:02:06.620 22:25:09 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:02:06.620 22:25:09 -- pm/common@17 -- $ local monitor
00:02:06.620 22:25:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:06.620 22:25:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:06.620 22:25:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:06.620 22:25:09 -- pm/common@21 -- $ date +%s
00:02:06.620 22:25:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:06.620 22:25:09 -- pm/common@21 -- $ date +%s
00:02:06.620 22:25:09 -- pm/common@25 -- $ sleep 1
00:02:06.620 22:25:09 -- pm/common@21 -- $ date +%s
00:02:06.620 22:25:09 -- pm/common@21 -- $ date +%s
00:02:06.620 22:25:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728678309
00:02:06.620 22:25:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728678309
00:02:06.620 22:25:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728678309
00:02:06.620 22:25:09 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728678309
00:02:06.620 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728678309_collect-cpu-temp.pm.log
00:02:06.620 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728678309_collect-vmstat.pm.log
00:02:06.620 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728678309_collect-cpu-load.pm.log
00:02:06.620 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728678309_collect-bmc-pm.bmc.pm.log
00:02:07.564 22:25:10 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:02:07.564 22:25:10 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:07.564 22:25:10 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:07.564 22:25:10 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:07.564 22:25:10 -- spdk/autobuild.sh@16 -- $ date -u
00:02:07.564 Fri Oct 11 08:25:10 PM UTC 2024
00:02:07.564 22:25:10 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:07.564 v25.01-pre-55-gbbce7a874
00:02:07.564 22:25:10 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:07.564 22:25:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:07.564 22:25:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:07.564 22:25:10 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:07.564 22:25:10 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:07.564 22:25:10 -- common/autotest_common.sh@10 -- $ set +x
00:02:07.564 ************************************
00:02:07.564 START TEST ubsan
00:02:07.564 ************************************
00:02:07.564 22:25:10 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:02:07.564 using ubsan
00:02:07.564
00:02:07.564 real 0m0.000s
00:02:07.564 user 0m0.000s
00:02:07.564 sys 0m0.000s
00:02:07.564 22:25:10 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:07.564 22:25:10 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:07.564 ************************************
00:02:07.564 END TEST ubsan
00:02:07.564 ************************************
00:02:07.564 22:25:10 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
00:02:07.564 22:25:10 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:02:07.564 22:25:10 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk
00:02:07.564 22:25:10 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']'
00:02:07.564 22:25:10 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:07.564 22:25:10 -- common/autotest_common.sh@10 -- $ set +x
00:02:07.564 ************************************
00:02:07.564 START TEST build_native_dpdk
00:02:07.564 ************************************
00:02:07.564 22:25:10 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk
00:02:07.564 22:25:10 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:02:07.564 22:25:10 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:02:07.564 22:25:10 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:02:07.564 22:25:10 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:02:07.564 22:25:10 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:02:07.564 22:25:10 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:02:07.564 22:25:10 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:02:07.564 22:25:10 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:02:07.564 22:25:10 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:02:07.564 22:25:10 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:02:07.564 22:25:10 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:02:07.564 22:25:10 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:02:07.824 22:25:10 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:02:07.824 22:25:10 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:02:07.824 22:25:10 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:07.824 22:25:10 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:07.824 22:25:10 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:02:07.824 22:25:10 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]]
00:02:07.824 22:25:10 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:07.824 22:25:10 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5
00:02:07.824 eeb0605f11 version: 23.11.0
00:02:07.824 238778122a doc: update release notes for 23.11
00:02:07.824 46aa6b3cfc doc: fix description of RSS features
00:02:07.824 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:02:07.824 7e421ae345 devtools: support skipping forbid rule check
00:02:07.824 22:25:10 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:02:07.824 22:25:10 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:02:07.824 22:25:10 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
00:02:07.824 22:25:10 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:02:07.824 22:25:10 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:02:07.824 22:25:10 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:02:07.824 22:25:10 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:02:07.824 22:25:10 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:02:07.824 22:25:10 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:02:07.825 22:25:10 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:02:07.825 22:25:10 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:02:07.825 22:25:10 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:02:07.825 22:25:10 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:02:07.825 22:25:10 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:02:07.825 22:25:10 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:02:07.825 22:25:10 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
00:02:07.825 22:25:10 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:02:07.825 22:25:10 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:02:07.825 22:25:10 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:02:07.825 patching file config/rte_config.h
00:02:07.825 Hunk #1 succeeded at 60 (offset 1 line).
00:02:07.825 22:25:10 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@368 -- $ return 0
00:02:07.825 22:25:10 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1
00:02:07.825 patching file lib/pcapng/rte_pcapng.c
00:02:07.825 22:25:10 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:07.825 22:25:10 build_native_dpdk -- scripts/common.sh@368 -- $ return 1
00:02:07.825 22:25:10 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false
00:02:07.825 22:25:10 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s
00:02:07.825 22:25:10 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']'
00:02:07.825 22:25:10 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
00:02:07.825 22:25:10 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:14.398 The Meson build system
00:02:14.398 Version: 1.5.0
00:02:14.399 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:02:14.399 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp
00:02:14.399 Build type: native build
00:02:14.399 Program cat found: YES (/usr/bin/cat)
00:02:14.399 Project name: DPDK
00:02:14.399 Project version: 23.11.0
00:02:14.399 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:14.399 C linker for the host machine: gcc ld.bfd 2.40-14
00:02:14.399 Host machine cpu family: x86_64
00:02:14.399 Host machine cpu: x86_64
00:02:14.399 Message: ## Building in Developer Mode ##
00:02:14.399 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:14.399 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh)
00:02:14.399 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh)
00:02:14.399 Program python3 found: YES (/usr/bin/python3)
00:02:14.399 Program cat found: YES (/usr/bin/cat)
00:02:14.399 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:02:14.399 Compiler for C supports arguments -march=native: YES 00:02:14.399 Checking for size of "void *" : 8 00:02:14.399 Checking for size of "void *" : 8 (cached) 00:02:14.399 Library m found: YES 00:02:14.399 Library numa found: YES 00:02:14.399 Has header "numaif.h" : YES 00:02:14.399 Library fdt found: NO 00:02:14.399 Library execinfo found: NO 00:02:14.399 Has header "execinfo.h" : YES 00:02:14.399 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:14.399 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:14.399 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:14.399 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:14.399 Run-time dependency openssl found: YES 3.1.1 00:02:14.399 Run-time dependency libpcap found: YES 1.10.4 00:02:14.399 Has header "pcap.h" with dependency libpcap: YES 00:02:14.399 Compiler for C supports arguments -Wcast-qual: YES 00:02:14.399 Compiler for C supports arguments -Wdeprecated: YES 00:02:14.399 Compiler for C supports arguments -Wformat: YES 00:02:14.399 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:14.399 Compiler for C supports arguments -Wformat-security: NO 00:02:14.399 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:14.399 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:14.399 Compiler for C supports arguments -Wnested-externs: YES 00:02:14.399 Compiler for C supports arguments -Wold-style-definition: YES 00:02:14.399 Compiler for C supports arguments -Wpointer-arith: YES 00:02:14.399 Compiler for C supports arguments -Wsign-compare: YES 00:02:14.399 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:14.399 Compiler for C supports arguments -Wundef: YES 00:02:14.399 Compiler for C supports arguments -Wwrite-strings: YES 00:02:14.399 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:14.399 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:14.399 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:02:14.399 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:14.399 Program objdump found: YES (/usr/bin/objdump) 00:02:14.399 Compiler for C supports arguments -mavx512f: YES 00:02:14.399 Checking if "AVX512 checking" compiles: YES 00:02:14.399 Fetching value of define "__SSE4_2__" : 1 00:02:14.399 Fetching value of define "__AES__" : 1 00:02:14.399 Fetching value of define "__AVX__" : 1 00:02:14.399 Fetching value of define "__AVX2__" : (undefined) 00:02:14.399 Fetching value of define "__AVX512BW__" : (undefined) 00:02:14.399 Fetching value of define "__AVX512CD__" : (undefined) 00:02:14.399 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:14.399 Fetching value of define "__AVX512F__" : (undefined) 00:02:14.399 Fetching value of define "__AVX512VL__" : (undefined) 00:02:14.399 Fetching value of define "__PCLMUL__" : 1 00:02:14.399 Fetching value of define "__RDRND__" : 1 00:02:14.399 Fetching value of define "__RDSEED__" : (undefined) 00:02:14.399 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:14.399 Fetching value of define "__znver1__" : (undefined) 00:02:14.399 Fetching value of define "__znver2__" : (undefined) 00:02:14.399 Fetching value of define "__znver3__" : (undefined) 00:02:14.399 Fetching value of define "__znver4__" : (undefined) 00:02:14.399 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:14.399 Message: lib/log: Defining dependency "log" 00:02:14.399 Message: lib/kvargs: Defining dependency "kvargs" 00:02:14.399 Message: lib/telemetry: Defining dependency "telemetry" 00:02:14.399 Checking for function "getentropy" : NO 00:02:14.399 Message: lib/eal: Defining dependency "eal" 00:02:14.399 Message: lib/ring: Defining dependency "ring" 00:02:14.399 Message: lib/rcu: Defining dependency "rcu" 00:02:14.399 Message: lib/mempool: Defining dependency "mempool" 00:02:14.399 Message: lib/mbuf: Defining dependency "mbuf" 00:02:14.399 
Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:14.399 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:14.399 Compiler for C supports arguments -mpclmul: YES 00:02:14.399 Compiler for C supports arguments -maes: YES 00:02:14.399 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:14.399 Compiler for C supports arguments -mavx512bw: YES 00:02:14.399 Compiler for C supports arguments -mavx512dq: YES 00:02:14.399 Compiler for C supports arguments -mavx512vl: YES 00:02:14.399 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:14.399 Compiler for C supports arguments -mavx2: YES 00:02:14.399 Compiler for C supports arguments -mavx: YES 00:02:14.399 Message: lib/net: Defining dependency "net" 00:02:14.399 Message: lib/meter: Defining dependency "meter" 00:02:14.399 Message: lib/ethdev: Defining dependency "ethdev" 00:02:14.399 Message: lib/pci: Defining dependency "pci" 00:02:14.399 Message: lib/cmdline: Defining dependency "cmdline" 00:02:14.399 Message: lib/metrics: Defining dependency "metrics" 00:02:14.399 Message: lib/hash: Defining dependency "hash" 00:02:14.399 Message: lib/timer: Defining dependency "timer" 00:02:14.399 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:14.399 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:14.399 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:14.399 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:14.399 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:14.399 Message: lib/acl: Defining dependency "acl" 00:02:14.399 Message: lib/bbdev: Defining dependency "bbdev" 00:02:14.399 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:14.399 Run-time dependency libelf found: YES 0.191 00:02:14.399 Message: lib/bpf: Defining dependency "bpf" 00:02:14.399 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:14.399 Message: lib/compressdev: Defining 
dependency "compressdev" 00:02:14.399 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:14.399 Message: lib/distributor: Defining dependency "distributor" 00:02:14.399 Message: lib/dmadev: Defining dependency "dmadev" 00:02:14.399 Message: lib/efd: Defining dependency "efd" 00:02:14.399 Message: lib/eventdev: Defining dependency "eventdev" 00:02:14.399 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:14.399 Message: lib/gpudev: Defining dependency "gpudev" 00:02:14.399 Message: lib/gro: Defining dependency "gro" 00:02:14.399 Message: lib/gso: Defining dependency "gso" 00:02:14.399 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:14.399 Message: lib/jobstats: Defining dependency "jobstats" 00:02:14.399 Message: lib/latencystats: Defining dependency "latencystats" 00:02:14.399 Message: lib/lpm: Defining dependency "lpm" 00:02:14.399 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:14.399 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:14.399 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:14.399 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:14.399 Message: lib/member: Defining dependency "member" 00:02:14.399 Message: lib/pcapng: Defining dependency "pcapng" 00:02:14.399 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:14.399 Message: lib/power: Defining dependency "power" 00:02:14.399 Message: lib/rawdev: Defining dependency "rawdev" 00:02:14.399 Message: lib/regexdev: Defining dependency "regexdev" 00:02:14.399 Message: lib/mldev: Defining dependency "mldev" 00:02:14.399 Message: lib/rib: Defining dependency "rib" 00:02:14.399 Message: lib/reorder: Defining dependency "reorder" 00:02:14.399 Message: lib/sched: Defining dependency "sched" 00:02:14.399 Message: lib/security: Defining dependency "security" 00:02:14.399 Message: lib/stack: Defining dependency "stack" 00:02:14.399 Has header "linux/userfaultfd.h" : YES 00:02:14.399 Has 
header "linux/vduse.h" : YES 00:02:14.399 Message: lib/vhost: Defining dependency "vhost" 00:02:14.399 Message: lib/ipsec: Defining dependency "ipsec" 00:02:14.399 Message: lib/pdcp: Defining dependency "pdcp" 00:02:14.399 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:14.399 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:14.399 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:14.399 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:14.399 Message: lib/fib: Defining dependency "fib" 00:02:14.399 Message: lib/port: Defining dependency "port" 00:02:14.399 Message: lib/pdump: Defining dependency "pdump" 00:02:14.399 Message: lib/table: Defining dependency "table" 00:02:14.399 Message: lib/pipeline: Defining dependency "pipeline" 00:02:14.399 Message: lib/graph: Defining dependency "graph" 00:02:14.399 Message: lib/node: Defining dependency "node" 00:02:15.786 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:15.786 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:15.786 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:15.786 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:15.786 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:15.786 Compiler for C supports arguments -Wno-unused-value: YES 00:02:15.786 Compiler for C supports arguments -Wno-format: YES 00:02:15.786 Compiler for C supports arguments -Wno-format-security: YES 00:02:15.786 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:15.786 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:15.786 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:15.786 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:15.786 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:15.786 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:15.786 Compiler for C supports 
arguments -mavx512bw: YES (cached) 00:02:15.786 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:15.786 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:15.786 Has header "sys/epoll.h" : YES 00:02:15.786 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:15.786 Configuring doxy-api-html.conf using configuration 00:02:15.786 Configuring doxy-api-man.conf using configuration 00:02:15.786 Program mandb found: YES (/usr/bin/mandb) 00:02:15.786 Program sphinx-build found: NO 00:02:15.786 Configuring rte_build_config.h using configuration 00:02:15.786 Message: 00:02:15.786 ================= 00:02:15.786 Applications Enabled 00:02:15.786 ================= 00:02:15.786 00:02:15.786 apps: 00:02:15.786 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:15.786 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:15.786 test-pmd, test-regex, test-sad, test-security-perf, 00:02:15.786 00:02:15.786 Message: 00:02:15.786 ================= 00:02:15.786 Libraries Enabled 00:02:15.786 ================= 00:02:15.786 00:02:15.786 libs: 00:02:15.786 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:15.786 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:15.786 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:15.786 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:15.786 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:15.786 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:15.786 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:15.786 00:02:15.786 00:02:15.786 Message: 00:02:15.786 =============== 00:02:15.786 Drivers Enabled 00:02:15.786 =============== 00:02:15.786 00:02:15.786 common: 00:02:15.786 00:02:15.786 bus: 00:02:15.786 pci, vdev, 00:02:15.786 mempool: 00:02:15.786 ring, 00:02:15.786 dma: 
00:02:15.786 00:02:15.786 net: 00:02:15.786 i40e, 00:02:15.786 raw: 00:02:15.786 00:02:15.786 crypto: 00:02:15.786 00:02:15.786 compress: 00:02:15.786 00:02:15.786 regex: 00:02:15.786 00:02:15.786 ml: 00:02:15.786 00:02:15.786 vdpa: 00:02:15.786 00:02:15.786 event: 00:02:15.786 00:02:15.786 baseband: 00:02:15.786 00:02:15.786 gpu: 00:02:15.786 00:02:15.786 00:02:15.786 Message: 00:02:15.786 ================= 00:02:15.786 Content Skipped 00:02:15.786 ================= 00:02:15.786 00:02:15.786 apps: 00:02:15.786 00:02:15.786 libs: 00:02:15.786 00:02:15.786 drivers: 00:02:15.786 common/cpt: not in enabled drivers build config 00:02:15.786 common/dpaax: not in enabled drivers build config 00:02:15.786 common/iavf: not in enabled drivers build config 00:02:15.786 common/idpf: not in enabled drivers build config 00:02:15.786 common/mvep: not in enabled drivers build config 00:02:15.786 common/octeontx: not in enabled drivers build config 00:02:15.786 bus/auxiliary: not in enabled drivers build config 00:02:15.786 bus/cdx: not in enabled drivers build config 00:02:15.786 bus/dpaa: not in enabled drivers build config 00:02:15.786 bus/fslmc: not in enabled drivers build config 00:02:15.786 bus/ifpga: not in enabled drivers build config 00:02:15.786 bus/platform: not in enabled drivers build config 00:02:15.786 bus/vmbus: not in enabled drivers build config 00:02:15.786 common/cnxk: not in enabled drivers build config 00:02:15.786 common/mlx5: not in enabled drivers build config 00:02:15.786 common/nfp: not in enabled drivers build config 00:02:15.786 common/qat: not in enabled drivers build config 00:02:15.786 common/sfc_efx: not in enabled drivers build config 00:02:15.786 mempool/bucket: not in enabled drivers build config 00:02:15.786 mempool/cnxk: not in enabled drivers build config 00:02:15.786 mempool/dpaa: not in enabled drivers build config 00:02:15.786 mempool/dpaa2: not in enabled drivers build config 00:02:15.786 mempool/octeontx: not in enabled drivers build 
config 00:02:15.786 mempool/stack: not in enabled drivers build config 00:02:15.786 dma/cnxk: not in enabled drivers build config 00:02:15.786 dma/dpaa: not in enabled drivers build config 00:02:15.786 dma/dpaa2: not in enabled drivers build config 00:02:15.786 dma/hisilicon: not in enabled drivers build config 00:02:15.787 dma/idxd: not in enabled drivers build config 00:02:15.787 dma/ioat: not in enabled drivers build config 00:02:15.787 dma/skeleton: not in enabled drivers build config 00:02:15.787 net/af_packet: not in enabled drivers build config 00:02:15.787 net/af_xdp: not in enabled drivers build config 00:02:15.787 net/ark: not in enabled drivers build config 00:02:15.787 net/atlantic: not in enabled drivers build config 00:02:15.787 net/avp: not in enabled drivers build config 00:02:15.787 net/axgbe: not in enabled drivers build config 00:02:15.787 net/bnx2x: not in enabled drivers build config 00:02:15.787 net/bnxt: not in enabled drivers build config 00:02:15.787 net/bonding: not in enabled drivers build config 00:02:15.787 net/cnxk: not in enabled drivers build config 00:02:15.787 net/cpfl: not in enabled drivers build config 00:02:15.787 net/cxgbe: not in enabled drivers build config 00:02:15.787 net/dpaa: not in enabled drivers build config 00:02:15.787 net/dpaa2: not in enabled drivers build config 00:02:15.787 net/e1000: not in enabled drivers build config 00:02:15.787 net/ena: not in enabled drivers build config 00:02:15.787 net/enetc: not in enabled drivers build config 00:02:15.787 net/enetfec: not in enabled drivers build config 00:02:15.787 net/enic: not in enabled drivers build config 00:02:15.787 net/failsafe: not in enabled drivers build config 00:02:15.787 net/fm10k: not in enabled drivers build config 00:02:15.787 net/gve: not in enabled drivers build config 00:02:15.787 net/hinic: not in enabled drivers build config 00:02:15.787 net/hns3: not in enabled drivers build config 00:02:15.787 net/iavf: not in enabled drivers build config 
00:02:15.787 net/ice: not in enabled drivers build config 00:02:15.787 net/idpf: not in enabled drivers build config 00:02:15.787 net/igc: not in enabled drivers build config 00:02:15.787 net/ionic: not in enabled drivers build config 00:02:15.787 net/ipn3ke: not in enabled drivers build config 00:02:15.787 net/ixgbe: not in enabled drivers build config 00:02:15.787 net/mana: not in enabled drivers build config 00:02:15.787 net/memif: not in enabled drivers build config 00:02:15.787 net/mlx4: not in enabled drivers build config 00:02:15.787 net/mlx5: not in enabled drivers build config 00:02:15.787 net/mvneta: not in enabled drivers build config 00:02:15.787 net/mvpp2: not in enabled drivers build config 00:02:15.787 net/netvsc: not in enabled drivers build config 00:02:15.787 net/nfb: not in enabled drivers build config 00:02:15.787 net/nfp: not in enabled drivers build config 00:02:15.787 net/ngbe: not in enabled drivers build config 00:02:15.787 net/null: not in enabled drivers build config 00:02:15.787 net/octeontx: not in enabled drivers build config 00:02:15.787 net/octeon_ep: not in enabled drivers build config 00:02:15.787 net/pcap: not in enabled drivers build config 00:02:15.787 net/pfe: not in enabled drivers build config 00:02:15.787 net/qede: not in enabled drivers build config 00:02:15.787 net/ring: not in enabled drivers build config 00:02:15.787 net/sfc: not in enabled drivers build config 00:02:15.787 net/softnic: not in enabled drivers build config 00:02:15.787 net/tap: not in enabled drivers build config 00:02:15.787 net/thunderx: not in enabled drivers build config 00:02:15.787 net/txgbe: not in enabled drivers build config 00:02:15.787 net/vdev_netvsc: not in enabled drivers build config 00:02:15.787 net/vhost: not in enabled drivers build config 00:02:15.787 net/virtio: not in enabled drivers build config 00:02:15.787 net/vmxnet3: not in enabled drivers build config 00:02:15.787 raw/cnxk_bphy: not in enabled drivers build config 00:02:15.787 
raw/cnxk_gpio: not in enabled drivers build config 00:02:15.787 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:15.787 raw/ifpga: not in enabled drivers build config 00:02:15.787 raw/ntb: not in enabled drivers build config 00:02:15.787 raw/skeleton: not in enabled drivers build config 00:02:15.787 crypto/armv8: not in enabled drivers build config 00:02:15.787 crypto/bcmfs: not in enabled drivers build config 00:02:15.787 crypto/caam_jr: not in enabled drivers build config 00:02:15.787 crypto/ccp: not in enabled drivers build config 00:02:15.787 crypto/cnxk: not in enabled drivers build config 00:02:15.787 crypto/dpaa_sec: not in enabled drivers build config 00:02:15.787 crypto/dpaa2_sec: not in enabled drivers build config 00:02:15.787 crypto/ipsec_mb: not in enabled drivers build config 00:02:15.787 crypto/mlx5: not in enabled drivers build config 00:02:15.787 crypto/mvsam: not in enabled drivers build config 00:02:15.787 crypto/nitrox: not in enabled drivers build config 00:02:15.787 crypto/null: not in enabled drivers build config 00:02:15.787 crypto/octeontx: not in enabled drivers build config 00:02:15.787 crypto/openssl: not in enabled drivers build config 00:02:15.787 crypto/scheduler: not in enabled drivers build config 00:02:15.787 crypto/uadk: not in enabled drivers build config 00:02:15.787 crypto/virtio: not in enabled drivers build config 00:02:15.787 compress/isal: not in enabled drivers build config 00:02:15.787 compress/mlx5: not in enabled drivers build config 00:02:15.787 compress/octeontx: not in enabled drivers build config 00:02:15.787 compress/zlib: not in enabled drivers build config 00:02:15.787 regex/mlx5: not in enabled drivers build config 00:02:15.787 regex/cn9k: not in enabled drivers build config 00:02:15.787 ml/cnxk: not in enabled drivers build config 00:02:15.787 vdpa/ifc: not in enabled drivers build config 00:02:15.787 vdpa/mlx5: not in enabled drivers build config 00:02:15.787 vdpa/nfp: not in enabled drivers build 
config 00:02:15.787 vdpa/sfc: not in enabled drivers build config 00:02:15.787 event/cnxk: not in enabled drivers build config 00:02:15.787 event/dlb2: not in enabled drivers build config 00:02:15.787 event/dpaa: not in enabled drivers build config 00:02:15.787 event/dpaa2: not in enabled drivers build config 00:02:15.787 event/dsw: not in enabled drivers build config 00:02:15.787 event/opdl: not in enabled drivers build config 00:02:15.787 event/skeleton: not in enabled drivers build config 00:02:15.787 event/sw: not in enabled drivers build config 00:02:15.787 event/octeontx: not in enabled drivers build config 00:02:15.787 baseband/acc: not in enabled drivers build config 00:02:15.787 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:15.787 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:15.787 baseband/la12xx: not in enabled drivers build config 00:02:15.787 baseband/null: not in enabled drivers build config 00:02:15.787 baseband/turbo_sw: not in enabled drivers build config 00:02:15.787 gpu/cuda: not in enabled drivers build config 00:02:15.787 00:02:15.787 00:02:15.787 Build targets in project: 220 00:02:15.787 00:02:15.787 DPDK 23.11.0 00:02:15.787 00:02:15.787 User defined options 00:02:15.787 libdir : lib 00:02:15.787 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:15.787 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:15.787 c_link_args : 00:02:15.787 enable_docs : false 00:02:15.787 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:15.787 enable_kmods : false 00:02:15.787 machine : native 00:02:15.787 tests : false 00:02:15.787 00:02:15.787 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:15.787 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
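The configure step ends with Meson itself flagging two deprecations in the command the build script used: the bare `meson [options]` spelling (ambiguous; `meson setup` is the explicit form) and the `machine` option (superseded by `cpu_instruction_set`, per the `config/meson.build:113` warning earlier in the log). A sketch of the same configure invocation with both warnings addressed — options and paths taken verbatim from the log, not a tested replacement:

```shell
# Same configure step as in the log, in the non-deprecated spelling:
# "meson setup" instead of bare "meson", and -Dcpu_instruction_set
# instead of -Dmachine, as the two warnings above suggest.
meson setup build-tmp \
  --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
  --libdir lib \
  -Denable_docs=false -Denable_kmods=false -Dtests=false \
  -Dc_link_args= \
  '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
  -Dcpu_instruction_set=native \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
```

The trailing comma in `-Denable_drivers` comes from the `printf %s,` that builds the list in `autobuild_common.sh@188`; Meson tolerates the empty final element.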
00:02:15.787 22:25:18 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:02:15.787 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:15.787 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:15.787 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:15.787 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:15.787 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:15.787 [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:15.787 [6/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:15.787 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:15.787 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:15.787 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:15.787 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:15.787 [11/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:16.052 [12/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:16.052 [13/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:16.052 [14/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:16.052 [15/710] Linking static target lib/librte_kvargs.a 00:02:16.052 [16/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:16.052 [17/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:16.052 [18/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:16.052 [19/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:16.052 [20/710] Linking static target lib/librte_log.a 00:02:16.314 [21/710] Compiling 
C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:16.576 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.841 [23/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:16.841 [24/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:16.841 [25/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:16.841 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:16.841 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:16.841 [28/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:16.841 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:16.841 [30/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:16.841 [31/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:17.110 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:17.110 [33/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:17.110 [34/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:17.110 [35/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:17.110 [36/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:17.110 [37/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.110 [38/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:17.110 [39/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:17.110 [40/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:17.110 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:17.110 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:17.110 [43/710] Compiling C 
object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:17.110 [44/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:17.110 [45/710] Linking target lib/librte_log.so.24.0 00:02:17.110 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:17.110 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:17.110 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:17.110 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:17.110 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:17.110 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:17.110 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:17.110 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:17.110 [54/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:17.110 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:17.110 [56/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:17.110 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:17.110 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:17.371 [59/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:17.371 [60/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:17.371 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:17.371 [62/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:17.634 [63/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:17.634 [64/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:17.634 [65/710] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:17.634 [66/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:17.634 [67/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:17.900 [68/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:17.900 [69/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:17.900 [70/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:17.900 [71/710] Linking static target lib/librte_pci.a 00:02:17.900 [72/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:17.900 [73/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:17.900 [74/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:17.900 [75/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:18.160 [76/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:18.160 [77/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:18.160 [78/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:18.160 [79/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:18.160 [80/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:18.160 [81/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:18.160 [82/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:18.160 [83/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:18.160 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:18.160 [85/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:18.160 [86/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:18.160 [87/710] Linking static target lib/librte_ring.a 00:02:18.160 [88/710] Generating lib/pci.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:18.160 [89/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:18.160 [90/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:18.160 [91/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:18.160 [92/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:18.422 [93/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:18.422 [94/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:18.422 [95/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:18.422 [96/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:18.422 [97/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:18.422 [98/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:18.422 [99/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:18.422 [100/710] Linking static target lib/librte_meter.a 00:02:18.422 [101/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:18.422 [102/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:18.422 [103/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:18.422 [104/710] Linking static target lib/librte_telemetry.a 00:02:18.422 [105/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:18.422 [106/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:18.686 [107/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:18.686 [108/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:18.686 [109/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:18.686 [110/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:18.686 [111/710] Generating lib/ring.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:18.686 [112/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:18.686 [113/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:18.686 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:18.686 [115/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:18.951 [116/710] Linking static target lib/librte_eal.a 00:02:18.951 [117/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:18.951 [118/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.951 [119/710] Linking static target lib/librte_net.a 00:02:18.951 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:18.951 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:18.951 [122/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:18.951 [123/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:18.951 [124/710] Linking static target lib/librte_cmdline.a 00:02:19.224 [125/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:19.224 [126/710] Linking static target lib/librte_mempool.a 00:02:19.224 [127/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.224 [128/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:19.224 [129/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:19.224 [130/710] Linking static target lib/librte_cfgfile.a 00:02:19.224 [131/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.498 [132/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:19.498 [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:19.498 [134/710] Compiling C object 
lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:19.498 [135/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:19.498 [136/710] Linking static target lib/librte_metrics.a 00:02:19.498 [137/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:19.498 [138/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:19.498 [139/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:19.761 [140/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:19.761 [141/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:19.761 [142/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:19.761 [143/710] Linking static target lib/librte_rcu.a 00:02:19.761 [144/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:19.761 [145/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:19.761 [146/710] Linking static target lib/librte_bitratestats.a 00:02:19.761 [147/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:19.761 [148/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:19.761 [149/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.028 [150/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:20.028 [151/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:20.028 [152/710] Linking target lib/librte_kvargs.so.24.0 00:02:20.028 [153/710] Linking target lib/librte_telemetry.so.24.0 00:02:20.028 [154/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:20.028 [155/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:20.028 [156/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:20.028 [157/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.028 [158/710] Generating 
lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.028 [159/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:20.028 [160/710] Linking static target lib/librte_timer.a 00:02:20.028 [161/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.293 [162/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.293 [163/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:20.293 [164/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:20.293 [165/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:20.293 [166/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:20.293 [167/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:20.293 [168/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:20.293 [169/710] Linking static target lib/librte_bbdev.a 00:02:20.556 [170/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:20.556 [171/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:20.556 [172/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.556 [173/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:20.556 [174/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:20.556 [175/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:20.556 [176/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:20.556 [177/710] Linking static target lib/librte_compressdev.a 00:02:20.818 [178/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:20.818 [179/710] Generating lib/timer.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:20.818 [180/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:20.818 [181/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:21.085 [182/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:21.085 [183/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:21.085 [184/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:21.350 [185/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:21.350 [186/710] Linking static target lib/librte_distributor.a 00:02:21.350 [187/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:21.350 [188/710] Linking static target lib/librte_bpf.a 00:02:21.350 [189/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:21.350 [190/710] Linking static target lib/librte_dmadev.a 00:02:21.350 [191/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:21.350 [192/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.350 [193/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.614 [194/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:21.614 [195/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:21.614 [196/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:21.614 [197/710] Linking static target lib/librte_dispatcher.a 00:02:21.614 [198/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:21.614 [199/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:21.614 [200/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:21.614 [201/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:21.614 [202/710] Generating 
lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.884 [203/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:21.884 [204/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:21.884 [205/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.884 [206/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:21.884 [207/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:21.884 [208/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:21.884 [209/710] Linking static target lib/librte_gpudev.a 00:02:21.884 [210/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:21.884 [211/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:21.884 [212/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:21.884 [213/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:21.884 [214/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:21.884 [215/710] Linking static target lib/librte_gro.a 00:02:21.884 [216/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:22.149 [217/710] Linking static target lib/librte_jobstats.a 00:02:22.149 [218/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:22.149 [219/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.149 [220/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:22.149 [221/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:22.414 [222/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:22.414 [223/710] Linking static target lib/librte_latencystats.a 00:02:22.414 [224/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.414 
[225/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.414 [226/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.682 [227/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:22.682 [228/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:22.682 [229/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:22.682 [230/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:22.682 [231/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:22.682 [232/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:22.682 [233/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:22.682 [234/710] Linking static target lib/librte_ip_frag.a 00:02:22.682 [235/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.682 [236/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:22.682 [237/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:22.947 [238/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:22.947 [239/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:22.947 [240/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:22.947 [241/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:23.213 [242/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:23.213 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:23.213 [244/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:23.213 [245/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.213 [246/710] Compiling 
C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:23.477 [247/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:23.477 [248/710] Linking static target lib/librte_gso.a 00:02:23.477 [249/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.477 [250/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:23.477 [251/710] Linking static target lib/librte_regexdev.a 00:02:23.477 [252/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:23.477 [253/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:23.477 [254/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:23.477 [255/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:23.742 [256/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:23.742 [257/710] Linking static target lib/librte_rawdev.a 00:02:23.742 [258/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:23.742 [259/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.742 [260/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:23.742 [261/710] Linking static target lib/librte_mldev.a 00:02:23.742 [262/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:23.742 [263/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:23.742 [264/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:23.742 [265/710] Linking static target lib/librte_efd.a 00:02:23.742 [266/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:24.007 [267/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:24.007 [268/710] Linking static target lib/librte_pcapng.a 00:02:24.007 [269/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:24.007 [270/710] Compiling C 
object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:24.008 [271/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:24.008 [272/710] Linking static target lib/acl/libavx2_tmp.a 00:02:24.008 [273/710] Linking static target lib/librte_stack.a 00:02:24.008 [274/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:24.008 [275/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:24.008 [276/710] Linking static target lib/librte_lpm.a 00:02:24.279 [277/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:24.279 [278/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:24.279 [279/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.279 [280/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.279 [281/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:24.279 [282/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:24.279 [283/710] Linking static target lib/librte_hash.a 00:02:24.279 [284/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.279 [285/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:24.279 [286/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.543 [287/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:24.543 [288/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:24.543 [289/710] Linking static target lib/acl/libavx512_tmp.a 00:02:24.543 [290/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:24.543 [291/710] Linking static target lib/librte_reorder.a 00:02:24.543 [292/710] Linking static target lib/librte_acl.a 00:02:24.543 [293/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 
00:02:24.543 [294/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:24.543 [295/710] Linking static target lib/librte_power.a 00:02:24.543 [296/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:24.543 [297/710] Linking static target lib/librte_security.a 00:02:24.806 [298/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:24.806 [299/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.806 [300/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.806 [301/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:24.806 [302/710] Linking static target lib/librte_mbuf.a 00:02:24.806 [303/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:25.072 [304/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:25.072 [305/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.072 [306/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:25.072 [307/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.072 [308/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:25.072 [309/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:25.072 [310/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:25.072 [311/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:25.339 [312/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:25.339 [313/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:25.339 [314/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:25.339 [315/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:25.339 [316/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:25.339 [317/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.339 [318/710] Linking static target lib/librte_rib.a 00:02:25.606 [319/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:25.606 [320/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:25.606 [321/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:25.606 [322/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:25.606 [323/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:25.606 [324/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:25.606 [325/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:25.606 [326/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:25.606 [327/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.872 [328/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.872 [329/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:25.872 [330/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.137 [331/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:26.137 [332/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:26.137 [333/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.137 [334/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:26.137 [335/710] Linking static target lib/librte_eventdev.a 00:02:26.401 [336/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:26.401 [337/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:26.401 [338/710] Linking static target lib/librte_member.a 00:02:26.401 [339/710] Compiling C 
object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:26.401 [340/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:26.401 [341/710] Linking static target lib/librte_cryptodev.a 00:02:26.668 [342/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:26.668 [343/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:26.668 [344/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:26.668 [345/710] Linking static target lib/librte_ethdev.a 00:02:26.668 [346/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:26.668 [347/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:26.668 [348/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:26.668 [349/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:26.668 [350/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:26.668 [351/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:26.668 [352/710] Linking static target lib/librte_sched.a 00:02:26.668 [353/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:26.668 [354/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:26.668 [355/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:26.668 [356/710] Linking static target lib/librte_fib.a 00:02:26.940 [357/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:26.940 [358/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:26.940 [359/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.940 [360/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:26.940 [361/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:27.206 [362/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 
00:02:27.206 [363/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:27.206 [364/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:27.206 [365/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:27.206 [366/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:27.206 [367/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:27.471 [368/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.471 [369/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:27.471 [370/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.471 [371/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:27.471 [372/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:27.471 [373/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:27.735 [374/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:27.735 [375/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:27.735 [376/710] Linking static target lib/librte_pdump.a 00:02:27.735 [377/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:27.999 [378/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:27.999 [379/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:27.999 [380/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:27.999 [381/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:27.999 [382/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:27.999 [383/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:27.999 [384/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:28.264 [385/710] Compiling C object 
lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:28.264 [386/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:28.264 [387/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:28.264 [388/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.264 [389/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:28.264 [390/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:28.264 [391/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:28.527 [392/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:28.527 [393/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:28.527 [394/710] Linking static target lib/librte_table.a 00:02:28.527 [395/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:28.527 [396/710] Linking static target lib/librte_ipsec.a 00:02:28.527 [397/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:28.793 [398/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:28.793 [399/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:28.793 [400/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.793 [401/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:29.056 [402/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.323 [403/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:29.323 [404/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:29.323 [405/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:29.323 [406/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:29.323 [407/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 
00:02:29.323 [408/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:29.323 [409/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:29.588 [410/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:29.588 [411/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:29.588 [412/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:29.588 [413/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:29.588 [414/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:29.588 [415/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.859 [416/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:29.859 [417/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:29.859 [418/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:29.859 [419/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.125 [420/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:30.125 [421/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:30.125 [422/710] Linking static target lib/librte_port.a 00:02:30.125 [423/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:30.125 [424/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:30.125 [425/710] Linking static target drivers/librte_bus_vdev.a 00:02:30.125 [426/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:30.125 [427/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:30.392 [428/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:30.392 [429/710] Compiling C object 
drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:30.392 [430/710] Linking static target drivers/librte_bus_pci.a 00:02:30.392 [431/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:30.392 [432/710] Linking static target lib/librte_graph.a 00:02:30.392 [433/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:30.392 [434/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:30.392 [435/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:30.392 [436/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.392 [437/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:30.392 [438/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.659 [439/710] Linking target lib/librte_eal.so.24.0 00:02:30.659 [440/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:30.659 [441/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:30.659 [442/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:30.929 [443/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:30.929 [444/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:30.929 [445/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:30.929 [446/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:30.929 [447/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.929 [448/710] Linking target lib/librte_ring.so.24.0 00:02:30.929 [449/710] Linking target lib/librte_meter.so.24.0 00:02:31.192 [450/710] Linking target lib/librte_pci.so.24.0 00:02:31.192 [451/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:31.192 
[452/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.192 [453/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:31.192 [454/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:31.192 [455/710] Linking target lib/librte_timer.so.24.0 00:02:31.192 [456/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:31.192 [457/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:31.458 [458/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:31.458 [459/710] Linking target lib/librte_rcu.so.24.0 00:02:31.458 [460/710] Linking target lib/librte_acl.so.24.0 00:02:31.458 [461/710] Linking target lib/librte_cfgfile.so.24.0 00:02:31.458 [462/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:31.458 [463/710] Linking target lib/librte_mempool.so.24.0 00:02:31.458 [464/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:31.458 [465/710] Linking target lib/librte_dmadev.so.24.0 00:02:31.458 [466/710] Linking target lib/librte_jobstats.so.24.0 00:02:31.458 [467/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:31.458 [468/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:31.458 [469/710] Linking target lib/librte_stack.so.24.0 00:02:31.458 [470/710] Linking target lib/librte_rawdev.so.24.0 00:02:31.458 [471/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:31.458 [472/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:31.458 [473/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:31.458 [474/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:31.458 [475/710] Compiling C object 
app/dpdk-graph.p/graph_graph.c.o 00:02:31.458 [476/710] Linking static target drivers/librte_mempool_ring.a 00:02:31.458 [477/710] Linking target drivers/librte_bus_pci.so.24.0 00:02:31.458 [478/710] Linking target drivers/librte_bus_vdev.so.24.0 00:02:31.458 [479/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:31.458 [480/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.458 [481/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:31.458 [482/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:31.458 [483/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:31.458 [484/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:31.723 [485/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:31.723 [486/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:31.723 [487/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:31.723 [488/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:31.723 [489/710] Linking target lib/librte_rib.so.24.0 00:02:31.723 [490/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:31.723 [491/710] Linking target lib/librte_mbuf.so.24.0 00:02:31.723 [492/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:31.723 [493/710] Linking target drivers/librte_mempool_ring.so.24.0 00:02:31.723 [494/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:31.723 [495/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:31.723 [496/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:31.723 [497/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:31.986 [498/710] Generating symbol 
file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:31.986 [499/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:31.986 [500/710] Linking target lib/librte_fib.so.24.0 00:02:31.987 [501/710] Linking target lib/librte_net.so.24.0 00:02:31.987 [502/710] Linking target lib/librte_bbdev.so.24.0 00:02:32.253 [503/710] Linking target lib/librte_compressdev.so.24.0 00:02:32.253 [504/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:32.253 [505/710] Linking target lib/librte_distributor.so.24.0 00:02:32.253 [506/710] Linking target lib/librte_cryptodev.so.24.0 00:02:32.253 [507/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:32.253 [508/710] Linking target lib/librte_gpudev.so.24.0 00:02:32.522 [509/710] Linking target lib/librte_cmdline.so.24.0 00:02:32.522 [510/710] Linking target lib/librte_hash.so.24.0 00:02:32.522 [511/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:32.522 [512/710] Linking target lib/librte_regexdev.so.24.0 00:02:32.522 [513/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:32.522 [514/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:32.522 [515/710] Linking target lib/librte_mldev.so.24.0 00:02:32.522 [516/710] Linking target lib/librte_reorder.so.24.0 00:02:32.522 [517/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:32.522 [518/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:32.522 [519/710] Linking target lib/librte_sched.so.24.0 00:02:32.785 [520/710] Linking target lib/librte_security.so.24.0 00:02:32.785 [521/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:32.785 [522/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:32.785 [523/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:32.785 [524/710] 
Linking target lib/librte_efd.so.24.0 00:02:32.785 [525/710] Linking target lib/librte_lpm.so.24.0 00:02:32.785 [526/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:32.785 [527/710] Linking target lib/librte_member.so.24.0 00:02:32.785 [528/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:32.785 [529/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:32.785 [530/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:32.785 [531/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:33.053 [532/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:33.053 [533/710] Linking target lib/librte_ipsec.so.24.0 00:02:33.053 [534/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:33.053 [535/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:33.053 [536/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:33.053 [537/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:33.053 [538/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:33.053 [539/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:33.053 [540/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:33.053 [541/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:33.315 [542/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:33.315 [543/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:33.583 [544/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:33.583 [545/710] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:33.583 [546/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:33.583 [547/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:33.583 [548/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:33.583 [549/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:33.848 [550/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:33.848 [551/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:33.848 [552/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:33.848 [553/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:34.110 [554/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:34.110 [555/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:34.376 [556/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:34.376 [557/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:34.376 [558/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:34.376 [559/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:34.640 [560/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:34.905 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:34.905 [562/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:34.905 [563/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:34.905 [564/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:35.173 [565/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:35.173 [566/710] Compiling C object 
app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:35.173 [567/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:35.173 [568/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:35.173 [569/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:35.439 [570/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:35.439 [571/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:35.439 [572/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:35.704 [573/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:35.704 [574/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:35.704 [575/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:35.704 [576/710] Linking static target lib/librte_pdcp.a 00:02:35.704 [577/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:35.704 [578/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:35.704 [579/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:35.704 [580/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.704 [581/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:35.963 [582/710] Linking target lib/librte_ethdev.so.24.0 00:02:35.963 [583/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:35.963 [584/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:35.963 [585/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:35.963 [586/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:36.225 [587/710] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:36.226 [588/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:36.226 [589/710] Linking target lib/librte_metrics.so.24.0 00:02:36.226 [590/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:36.226 [591/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.226 [592/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:36.226 [593/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:36.226 [594/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:36.493 [595/710] Linking target lib/librte_bpf.so.24.0 00:02:36.493 [596/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:36.493 [597/710] Linking target lib/librte_gro.so.24.0 00:02:36.493 [598/710] Linking target lib/librte_eventdev.so.24.0 00:02:36.493 [599/710] Linking target lib/librte_gso.so.24.0 00:02:36.493 [600/710] Linking target lib/librte_ip_frag.so.24.0 00:02:36.493 [601/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:36.493 [602/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:36.493 [603/710] Linking target lib/librte_pcapng.so.24.0 00:02:36.493 [604/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:36.493 [605/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:36.493 [606/710] Linking target lib/librte_pdcp.so.24.0 00:02:36.493 [607/710] Linking target lib/librte_power.so.24.0 00:02:36.493 [608/710] Linking target lib/librte_bitratestats.so.24.0 00:02:36.493 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:36.758 [610/710] Generating symbol file 
lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:36.758 [611/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:36.758 [612/710] Linking target lib/librte_latencystats.so.24.0 00:02:36.758 [613/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:36.758 [614/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:36.758 [615/710] Linking target lib/librte_dispatcher.so.24.0 00:02:36.758 [616/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:36.758 [617/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:36.758 [618/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:36.758 [619/710] Linking target lib/librte_port.so.24.0 00:02:37.021 [620/710] Linking target lib/librte_pdump.so.24.0 00:02:37.021 [621/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:37.021 [622/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:37.021 [623/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:37.021 [624/710] Linking target lib/librte_graph.so.24.0 00:02:37.021 [625/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:37.021 [626/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:37.289 [627/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:37.289 [628/710] Linking target lib/librte_table.so.24.0 00:02:37.289 [629/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:37.549 [630/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:37.549 [631/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:37.549 [632/710] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:37.549 [633/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:37.549 [634/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:37.549 [635/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:37.810 [636/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:37.810 [637/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:37.810 [638/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:37.810 [639/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:38.069 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:38.069 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:38.069 [642/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:38.069 [643/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:38.069 [644/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:38.069 [645/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:38.329 [646/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:38.329 [647/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:38.329 [648/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:38.329 [649/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:38.588 [650/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:38.588 [651/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:38.588 [652/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:38.588 [653/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:38.588 [654/710] Linking static target 
drivers/libtmp_rte_net_i40e.a 00:02:38.588 [655/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:38.847 [656/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:38.847 [657/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:39.107 [658/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:39.107 [659/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:39.107 [660/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:39.107 [661/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:39.107 [662/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:39.366 [663/710] Linking static target drivers/librte_net_i40e.a 00:02:39.366 [664/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:39.625 [665/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:39.625 [666/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:39.884 [667/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.884 [668/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:39.884 [669/710] Linking target drivers/librte_net_i40e.so.24.0 00:02:40.143 [670/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:40.401 [671/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:40.401 [672/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:40.661 [673/710] Linking static target lib/librte_node.a 00:02:40.661 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:40.920 [675/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:40.920 [676/710] Linking target lib/librte_node.so.24.0 00:02:41.857 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:42.117 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:42.117 [679/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:43.494 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:44.431 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:49.704 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:21.808 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:21.808 [684/710] Linking static target lib/librte_vhost.a 00:03:22.067 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.327 [686/710] Linking target lib/librte_vhost.so.24.0 00:03:32.329 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:32.329 [688/710] Linking static target lib/librte_pipeline.a 00:03:32.589 [689/710] Linking target app/dpdk-test-acl 00:03:32.589 [690/710] Linking target app/dpdk-test-cmdline 00:03:32.589 [691/710] Linking target app/dpdk-proc-info 00:03:32.589 [692/710] Linking target app/dpdk-pdump 00:03:32.589 [693/710] Linking target app/dpdk-test-dma-perf 00:03:32.589 [694/710] Linking target app/dpdk-test-sad 00:03:32.589 [695/710] Linking target app/dpdk-test-gpudev 00:03:32.589 [696/710] Linking target app/dpdk-test-fib 00:03:32.589 [697/710] Linking target app/dpdk-graph 00:03:32.589 [698/710] Linking target app/dpdk-test-regex 00:03:32.589 [699/710] Linking target app/dpdk-dumpcap 00:03:32.589 [700/710] Linking target app/dpdk-test-pipeline 00:03:32.589 [701/710] Linking target app/dpdk-test-compress-perf 00:03:32.590 [702/710] Linking target app/dpdk-test-security-perf 00:03:32.590 [703/710] Linking target app/dpdk-test-flow-perf 
00:03:32.590 [704/710] Linking target app/dpdk-test-bbdev 00:03:32.590 [705/710] Linking target app/dpdk-test-mldev 00:03:32.590 [706/710] Linking target app/dpdk-test-crypto-perf 00:03:32.590 [707/710] Linking target app/dpdk-test-eventdev 00:03:32.590 [708/710] Linking target app/dpdk-testpmd 00:03:35.133 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.133 [710/710] Linking target lib/librte_pipeline.so.24.0 00:03:35.133 22:26:37 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:35.133 22:26:37 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:35.133 22:26:37 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:03:35.133 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:35.133 [0/1] Installing files. 00:03:35.133 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:35.133 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.133 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.133 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.133 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.133 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.133 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.133 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.133 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.133 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.133 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.133 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.133 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.133 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.133 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.133 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.133 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.133 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.133 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.133 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 
00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.134 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:35.134 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.134 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 
00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:35.135 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.135 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:35.135 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:35.135 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:35.136 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:35.136 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:35.136 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:35.398 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.399 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.399 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.399 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:35.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:35.400 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.400 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:35.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:35.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool
00:03:35.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:35.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:35.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:35.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:35.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:35.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:35.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:35.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:03:35.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:03:35.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:03:35.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:35.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:35.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:03:35.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:03:35.401 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.401 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:03:35.974 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:03:35.974 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.974 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:03:35.975 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.975 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:03:35.975 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:35.975 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:35.975 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:35.975 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:35.975 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:35.975 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:35.975 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:35.975 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:35.975 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:35.975 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:35.975 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:35.975 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:35.975 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:35.975 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:35.975 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:35.975 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:35.975 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:35.975 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:35.975 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:35.975 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.976 Installing
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:35.978 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:35.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:35.978 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:03:35.978 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:03:35.978 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:03:35.978 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:35.979 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:03:35.979 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:35.979 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:03:35.979 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:35.979 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:03:35.979 
Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:35.979 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:03:35.979 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:35.979 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:03:35.979 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:35.979 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:03:35.979 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:35.979 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:03:35.979 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:35.979 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:03:35.979 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:35.979 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:03:35.979 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:35.979 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 
00:03:35.979 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:35.979 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:03:35.979 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:35.979 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:03:35.979 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:35.979 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:03:35.979 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:35.979 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:03:35.979 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:35.979 Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:03:35.979 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:35.979 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:03:35.979 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:35.979 Installing symlink pointing to librte_bitratestats.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:03:35.979 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:35.979 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:03:35.979 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:35.979 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:03:35.979 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:35.979 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:03:35.979 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:35.979 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:03:35.979 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:35.979 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:03:35.979 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:35.979 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:03:35.979 Installing symlink pointing to librte_dmadev.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:35.979 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:03:35.979 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:35.979 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:03:35.979 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:35.979 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:03:35.979 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:03:35.979 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:03:35.979 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:35.979 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:03:35.979 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:35.979 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:03:35.979 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:35.979 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:03:35.979 Installing 
symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:35.979 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:03:35.979 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:35.979 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:03:35.979 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:35.979 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:03:35.979 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:35.979 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:03:35.979 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:35.979 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:03:35.979 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:35.979 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:03:35.979 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:35.979 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:35.979 
'./librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:35.979 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:35.979 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:35.979 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:35.979 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:35.979 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:35.979 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:35.979 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:35.979 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:35.979 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:35.979 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:35.979 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:03:35.979 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:35.979 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:03:35.979 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:35.979 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:03:35.979 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:03:35.979 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:03:35.979 Installing symlink pointing to librte_rib.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:35.979 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:03:35.979 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:35.979 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:03:35.979 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:35.979 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:03:35.979 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:35.979 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:03:35.979 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:35.979 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:03:35.979 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:35.979 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:03:35.980 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:35.980 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:03:35.980 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:03:35.980 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:03:35.980 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:35.980 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:03:35.980 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:35.980 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:03:35.980 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:35.980 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:03:35.980 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:35.980 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:03:35.980 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:35.980 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:03:35.980 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:35.980 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 
00:03:35.980 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:35.980 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:35.980 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:35.980 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:35.980 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:35.980 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:35.980 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:35.980 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:03:35.980 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:35.980 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:36.239 22:26:39 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:36.239 22:26:39 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:36.239 00:03:36.239 real 1m28.461s 00:03:36.239 user 18m4.508s 00:03:36.239 sys 2m11.239s 00:03:36.239 22:26:39 build_native_dpdk -- 
common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:36.239 22:26:39 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:36.239 ************************************ 00:03:36.239 END TEST build_native_dpdk 00:03:36.239 ************************************ 00:03:36.239 22:26:39 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:36.239 22:26:39 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:36.239 22:26:39 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:36.239 22:26:39 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:36.239 22:26:39 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:36.239 22:26:39 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:36.239 22:26:39 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:36.239 22:26:39 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:36.239 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:03:36.239 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:36.239 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:36.498 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:36.759 Using 'verbs' RDMA provider 00:03:47.323 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:57.311 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:57.311 Creating mk/config.mk...done. 00:03:57.311 Creating mk/cc.flags.mk...done. 00:03:57.311 Type 'make' to build. 
00:03:57.311 22:27:00 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:03:57.311 22:27:00 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:57.311 22:27:00 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:57.311 22:27:00 -- common/autotest_common.sh@10 -- $ set +x 00:03:57.311 ************************************ 00:03:57.311 START TEST make 00:03:57.311 ************************************ 00:03:57.311 22:27:00 make -- common/autotest_common.sh@1125 -- $ make -j48 00:03:57.311 make[1]: Nothing to be done for 'all'. 00:03:59.241 The Meson build system 00:03:59.241 Version: 1.5.0 00:03:59.241 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:59.241 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:59.241 Build type: native build 00:03:59.241 Project name: libvfio-user 00:03:59.241 Project version: 0.0.1 00:03:59.241 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:59.241 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:59.241 Host machine cpu family: x86_64 00:03:59.241 Host machine cpu: x86_64 00:03:59.241 Run-time dependency threads found: YES 00:03:59.241 Library dl found: YES 00:03:59.241 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:59.241 Run-time dependency json-c found: YES 0.17 00:03:59.241 Run-time dependency cmocka found: YES 1.1.7 00:03:59.241 Program pytest-3 found: NO 00:03:59.241 Program flake8 found: NO 00:03:59.241 Program misspell-fixer found: NO 00:03:59.241 Program restructuredtext-lint found: NO 00:03:59.241 Program valgrind found: YES (/usr/bin/valgrind) 00:03:59.241 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:59.241 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:59.241 Compiler for C supports arguments -Wwrite-strings: YES 00:03:59.241 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but 
uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:59.241 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:59.241 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:59.241 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:59.241 Build targets in project: 8 00:03:59.241 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:59.241 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:59.241 00:03:59.241 libvfio-user 0.0.1 00:03:59.241 00:03:59.241 User defined options 00:03:59.241 buildtype : debug 00:03:59.241 default_library: shared 00:03:59.241 libdir : /usr/local/lib 00:03:59.241 00:03:59.241 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:00.193 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:00.193 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:04:00.193 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:04:00.193 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:04:00.193 [4/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:04:00.193 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:04:00.193 [6/37] Compiling C object samples/null.p/null.c.o 00:04:00.193 [7/37] Compiling C object samples/lspci.p/lspci.c.o 00:04:00.193 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:04:00.193 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:04:00.193 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:04:00.193 [11/37] Compiling C object test/unit_tests.p/mocks.c.o 
00:04:00.193 [12/37] Compiling C object samples/server.p/server.c.o 00:04:00.193 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:04:00.193 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:04:00.193 [15/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:04:00.193 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:04:00.193 [17/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:04:00.193 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:04:00.193 [19/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:04:00.193 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:04:00.193 [21/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:04:00.193 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:04:00.193 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:04:00.458 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:04:00.458 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:04:00.458 [26/37] Compiling C object samples/client.p/client.c.o 00:04:00.458 [27/37] Linking target samples/client 00:04:00.458 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:04:00.458 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:04:00.458 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:04:00.725 [31/37] Linking target test/unit_tests 00:04:00.725 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:04:00.725 [33/37] Linking target samples/server 00:04:00.725 [34/37] Linking target samples/gpio-pci-idio-16 00:04:00.725 [35/37] Linking target samples/null 00:04:00.725 [36/37] Linking target samples/shadow_ioeventfd_server 00:04:00.725 [37/37] Linking target samples/lspci 00:04:00.725 INFO: autodetecting backend as ninja 00:04:00.725 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:00.989 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:01.931 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:01.931 ninja: no work to do. 00:04:40.649 CC lib/log/log.o 00:04:40.649 CC lib/log/log_flags.o 00:04:40.649 CC lib/log/log_deprecated.o 00:04:40.649 CC lib/ut/ut.o 00:04:40.649 CC lib/ut_mock/mock.o 00:04:40.649 LIB libspdk_ut.a 00:04:40.649 LIB libspdk_ut_mock.a 00:04:40.649 LIB libspdk_log.a 00:04:40.649 SO libspdk_ut.so.2.0 00:04:40.649 SO libspdk_ut_mock.so.6.0 00:04:40.649 SO libspdk_log.so.7.1 00:04:40.649 SYMLINK libspdk_ut_mock.so 00:04:40.649 SYMLINK libspdk_ut.so 00:04:40.649 SYMLINK libspdk_log.so 00:04:40.649 CC lib/dma/dma.o 00:04:40.649 CC lib/ioat/ioat.o 00:04:40.649 CXX lib/trace_parser/trace.o 00:04:40.649 CC lib/util/base64.o 00:04:40.649 CC lib/util/bit_array.o 00:04:40.649 CC lib/util/cpuset.o 00:04:40.649 CC lib/util/crc16.o 00:04:40.649 CC lib/util/crc32.o 00:04:40.649 CC lib/util/crc32c.o 00:04:40.649 CC lib/util/crc32_ieee.o 00:04:40.649 CC lib/util/crc64.o 00:04:40.649 CC lib/util/dif.o 00:04:40.649 CC lib/util/fd.o 00:04:40.649 CC lib/util/fd_group.o 00:04:40.649 CC lib/util/file.o 00:04:40.649 CC lib/util/hexlify.o 00:04:40.649 CC lib/util/iov.o 00:04:40.649 CC lib/util/math.o 00:04:40.649 CC lib/util/net.o 00:04:40.649 CC lib/util/pipe.o 00:04:40.649 CC lib/util/strerror_tls.o 00:04:40.649 CC lib/util/string.o 00:04:40.649 CC lib/util/uuid.o 00:04:40.649 CC lib/util/xor.o 00:04:40.649 CC lib/util/zipf.o 00:04:40.649 CC lib/util/md5.o 00:04:40.649 CC lib/vfio_user/host/vfio_user_pci.o 00:04:40.649 CC lib/vfio_user/host/vfio_user.o 00:04:40.649 LIB libspdk_dma.a 00:04:40.649 SO libspdk_dma.so.5.0 00:04:40.649 SYMLINK libspdk_dma.so 
00:04:40.649 LIB libspdk_ioat.a 00:04:40.649 LIB libspdk_vfio_user.a 00:04:40.649 SO libspdk_ioat.so.7.0 00:04:40.649 SO libspdk_vfio_user.so.5.0 00:04:40.649 SYMLINK libspdk_ioat.so 00:04:40.649 SYMLINK libspdk_vfio_user.so 00:04:40.649 LIB libspdk_util.a 00:04:40.649 SO libspdk_util.so.10.0 00:04:40.649 SYMLINK libspdk_util.so 00:04:40.649 CC lib/idxd/idxd.o 00:04:40.649 CC lib/conf/conf.o 00:04:40.649 CC lib/vmd/vmd.o 00:04:40.649 CC lib/idxd/idxd_user.o 00:04:40.649 CC lib/env_dpdk/env.o 00:04:40.649 CC lib/vmd/led.o 00:04:40.649 CC lib/idxd/idxd_kernel.o 00:04:40.649 CC lib/env_dpdk/memory.o 00:04:40.649 CC lib/json/json_parse.o 00:04:40.649 CC lib/rdma_utils/rdma_utils.o 00:04:40.649 CC lib/rdma_provider/common.o 00:04:40.649 CC lib/env_dpdk/pci.o 00:04:40.649 CC lib/json/json_util.o 00:04:40.649 CC lib/env_dpdk/init.o 00:04:40.649 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:40.649 CC lib/json/json_write.o 00:04:40.649 CC lib/env_dpdk/threads.o 00:04:40.649 CC lib/env_dpdk/pci_ioat.o 00:04:40.649 CC lib/env_dpdk/pci_virtio.o 00:04:40.649 CC lib/env_dpdk/pci_vmd.o 00:04:40.649 CC lib/env_dpdk/pci_idxd.o 00:04:40.649 CC lib/env_dpdk/pci_event.o 00:04:40.649 CC lib/env_dpdk/sigbus_handler.o 00:04:40.649 CC lib/env_dpdk/pci_dpdk.o 00:04:40.649 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:40.649 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:40.649 LIB libspdk_rdma_provider.a 00:04:40.649 SO libspdk_rdma_provider.so.6.0 00:04:40.649 LIB libspdk_conf.a 00:04:40.649 SO libspdk_conf.so.6.0 00:04:40.649 SYMLINK libspdk_rdma_provider.so 00:04:40.649 SYMLINK libspdk_conf.so 00:04:40.649 LIB libspdk_json.a 00:04:40.649 SO libspdk_json.so.6.0 00:04:40.649 LIB libspdk_rdma_utils.a 00:04:40.649 SO libspdk_rdma_utils.so.1.0 00:04:40.649 SYMLINK libspdk_json.so 00:04:40.649 SYMLINK libspdk_rdma_utils.so 00:04:40.649 CC lib/jsonrpc/jsonrpc_server.o 00:04:40.649 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:40.649 CC lib/jsonrpc/jsonrpc_client.o 00:04:40.649 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:04:40.649 LIB libspdk_idxd.a 00:04:40.649 SO libspdk_idxd.so.12.1 00:04:40.649 LIB libspdk_vmd.a 00:04:40.649 SYMLINK libspdk_idxd.so 00:04:40.649 SO libspdk_vmd.so.6.0 00:04:40.649 SYMLINK libspdk_vmd.so 00:04:40.649 LIB libspdk_jsonrpc.a 00:04:40.649 SO libspdk_jsonrpc.so.6.0 00:04:40.649 LIB libspdk_trace_parser.a 00:04:40.649 SO libspdk_trace_parser.so.6.0 00:04:40.649 SYMLINK libspdk_jsonrpc.so 00:04:40.649 SYMLINK libspdk_trace_parser.so 00:04:40.649 CC lib/rpc/rpc.o 00:04:40.649 LIB libspdk_rpc.a 00:04:40.908 SO libspdk_rpc.so.6.0 00:04:40.908 SYMLINK libspdk_rpc.so 00:04:40.908 CC lib/notify/notify.o 00:04:40.908 CC lib/keyring/keyring.o 00:04:40.908 CC lib/trace/trace.o 00:04:40.908 CC lib/trace/trace_flags.o 00:04:40.908 CC lib/trace/trace_rpc.o 00:04:40.908 CC lib/keyring/keyring_rpc.o 00:04:40.908 CC lib/notify/notify_rpc.o 00:04:41.167 LIB libspdk_notify.a 00:04:41.167 SO libspdk_notify.so.6.0 00:04:41.167 SYMLINK libspdk_notify.so 00:04:41.167 LIB libspdk_keyring.a 00:04:41.167 LIB libspdk_trace.a 00:04:41.167 SO libspdk_keyring.so.2.0 00:04:41.426 SO libspdk_trace.so.11.0 00:04:41.426 SYMLINK libspdk_keyring.so 00:04:41.426 SYMLINK libspdk_trace.so 00:04:41.426 CC lib/sock/sock.o 00:04:41.426 CC lib/sock/sock_rpc.o 00:04:41.426 CC lib/thread/thread.o 00:04:41.426 CC lib/thread/iobuf.o 00:04:41.687 LIB libspdk_env_dpdk.a 00:04:41.687 SO libspdk_env_dpdk.so.15.0 00:04:41.687 SYMLINK libspdk_env_dpdk.so 00:04:41.947 LIB libspdk_sock.a 00:04:41.947 SO libspdk_sock.so.10.0 00:04:41.947 SYMLINK libspdk_sock.so 00:04:42.206 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:42.206 CC lib/nvme/nvme_ctrlr.o 00:04:42.206 CC lib/nvme/nvme_fabric.o 00:04:42.206 CC lib/nvme/nvme_ns_cmd.o 00:04:42.206 CC lib/nvme/nvme_ns.o 00:04:42.206 CC lib/nvme/nvme_pcie_common.o 00:04:42.206 CC lib/nvme/nvme_pcie.o 00:04:42.206 CC lib/nvme/nvme_qpair.o 00:04:42.206 CC lib/nvme/nvme.o 00:04:42.206 CC lib/nvme/nvme_quirks.o 00:04:42.206 CC 
lib/nvme/nvme_transport.o 00:04:42.206 CC lib/nvme/nvme_discovery.o 00:04:42.206 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:42.206 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:42.206 CC lib/nvme/nvme_tcp.o 00:04:42.206 CC lib/nvme/nvme_opal.o 00:04:42.206 CC lib/nvme/nvme_io_msg.o 00:04:42.206 CC lib/nvme/nvme_poll_group.o 00:04:42.206 CC lib/nvme/nvme_zns.o 00:04:42.206 CC lib/nvme/nvme_stubs.o 00:04:42.206 CC lib/nvme/nvme_auth.o 00:04:42.206 CC lib/nvme/nvme_cuse.o 00:04:42.206 CC lib/nvme/nvme_vfio_user.o 00:04:42.206 CC lib/nvme/nvme_rdma.o 00:04:43.156 LIB libspdk_thread.a 00:04:43.156 SO libspdk_thread.so.10.2 00:04:43.156 SYMLINK libspdk_thread.so 00:04:43.415 CC lib/accel/accel.o 00:04:43.415 CC lib/accel/accel_rpc.o 00:04:43.415 CC lib/accel/accel_sw.o 00:04:43.415 CC lib/fsdev/fsdev.o 00:04:43.415 CC lib/fsdev/fsdev_io.o 00:04:43.415 CC lib/fsdev/fsdev_rpc.o 00:04:43.415 CC lib/vfu_tgt/tgt_endpoint.o 00:04:43.415 CC lib/blob/blobstore.o 00:04:43.415 CC lib/virtio/virtio.o 00:04:43.415 CC lib/vfu_tgt/tgt_rpc.o 00:04:43.415 CC lib/init/json_config.o 00:04:43.415 CC lib/blob/request.o 00:04:43.415 CC lib/virtio/virtio_vhost_user.o 00:04:43.415 CC lib/init/subsystem.o 00:04:43.415 CC lib/virtio/virtio_vfio_user.o 00:04:43.415 CC lib/init/subsystem_rpc.o 00:04:43.415 CC lib/blob/zeroes.o 00:04:43.415 CC lib/blob/blob_bs_dev.o 00:04:43.415 CC lib/init/rpc.o 00:04:43.415 CC lib/virtio/virtio_pci.o 00:04:43.675 LIB libspdk_init.a 00:04:43.675 LIB libspdk_vfu_tgt.a 00:04:43.675 LIB libspdk_virtio.a 00:04:43.675 SO libspdk_vfu_tgt.so.3.0 00:04:43.675 SO libspdk_init.so.6.0 00:04:43.675 SO libspdk_virtio.so.7.0 00:04:43.934 SYMLINK libspdk_vfu_tgt.so 00:04:43.934 SYMLINK libspdk_init.so 00:04:43.934 SYMLINK libspdk_virtio.so 00:04:43.934 CC lib/event/app.o 00:04:43.934 CC lib/event/reactor.o 00:04:43.934 CC lib/event/log_rpc.o 00:04:43.934 CC lib/event/app_rpc.o 00:04:43.934 CC lib/event/scheduler_static.o 00:04:44.193 LIB libspdk_fsdev.a 00:04:44.193 SO 
libspdk_fsdev.so.1.0 00:04:44.193 SYMLINK libspdk_fsdev.so 00:04:44.451 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:44.451 LIB libspdk_event.a 00:04:44.451 SO libspdk_event.so.15.0 00:04:44.451 SYMLINK libspdk_event.so 00:04:44.451 LIB libspdk_accel.a 00:04:44.710 SO libspdk_accel.so.16.0 00:04:44.710 LIB libspdk_nvme.a 00:04:44.710 SYMLINK libspdk_accel.so 00:04:44.710 SO libspdk_nvme.so.14.0 00:04:44.710 CC lib/bdev/bdev.o 00:04:44.710 CC lib/bdev/bdev_rpc.o 00:04:44.710 CC lib/bdev/bdev_zone.o 00:04:44.710 CC lib/bdev/part.o 00:04:44.710 CC lib/bdev/scsi_nvme.o 00:04:44.968 SYMLINK libspdk_nvme.so 00:04:44.968 LIB libspdk_fuse_dispatcher.a 00:04:44.968 SO libspdk_fuse_dispatcher.so.1.0 00:04:44.968 SYMLINK libspdk_fuse_dispatcher.so 00:04:46.348 LIB libspdk_blob.a 00:04:46.607 SO libspdk_blob.so.11.0 00:04:46.607 SYMLINK libspdk_blob.so 00:04:46.607 CC lib/lvol/lvol.o 00:04:46.607 CC lib/blobfs/blobfs.o 00:04:46.607 CC lib/blobfs/tree.o 00:04:47.544 LIB libspdk_bdev.a 00:04:47.544 SO libspdk_bdev.so.17.0 00:04:47.544 SYMLINK libspdk_bdev.so 00:04:47.544 LIB libspdk_blobfs.a 00:04:47.544 SO libspdk_blobfs.so.10.0 00:04:47.544 SYMLINK libspdk_blobfs.so 00:04:47.544 LIB libspdk_lvol.a 00:04:47.814 SO libspdk_lvol.so.10.0 00:04:47.814 CC lib/nbd/nbd.o 00:04:47.814 CC lib/nbd/nbd_rpc.o 00:04:47.814 CC lib/ublk/ublk.o 00:04:47.814 CC lib/ublk/ublk_rpc.o 00:04:47.814 CC lib/scsi/dev.o 00:04:47.814 CC lib/nvmf/ctrlr.o 00:04:47.814 CC lib/scsi/lun.o 00:04:47.814 CC lib/nvmf/ctrlr_discovery.o 00:04:47.814 CC lib/ftl/ftl_core.o 00:04:47.814 CC lib/scsi/port.o 00:04:47.814 CC lib/nvmf/ctrlr_bdev.o 00:04:47.814 CC lib/ftl/ftl_init.o 00:04:47.814 CC lib/scsi/scsi.o 00:04:47.814 CC lib/nvmf/subsystem.o 00:04:47.814 CC lib/scsi/scsi_bdev.o 00:04:47.814 CC lib/nvmf/nvmf.o 00:04:47.814 CC lib/ftl/ftl_layout.o 00:04:47.814 CC lib/scsi/scsi_pr.o 00:04:47.814 CC lib/nvmf/nvmf_rpc.o 00:04:47.814 CC lib/ftl/ftl_debug.o 00:04:47.814 CC lib/ftl/ftl_io.o 00:04:47.814 CC 
lib/scsi/scsi_rpc.o 00:04:47.814 CC lib/nvmf/transport.o 00:04:47.814 CC lib/ftl/ftl_sb.o 00:04:47.814 CC lib/nvmf/tcp.o 00:04:47.814 CC lib/ftl/ftl_l2p.o 00:04:47.814 CC lib/nvmf/stubs.o 00:04:47.814 CC lib/scsi/task.o 00:04:47.814 CC lib/ftl/ftl_l2p_flat.o 00:04:47.814 CC lib/ftl/ftl_nv_cache.o 00:04:47.814 CC lib/nvmf/mdns_server.o 00:04:47.814 CC lib/ftl/ftl_band.o 00:04:47.814 CC lib/nvmf/vfio_user.o 00:04:47.814 CC lib/nvmf/rdma.o 00:04:47.814 CC lib/ftl/ftl_band_ops.o 00:04:47.814 CC lib/nvmf/auth.o 00:04:47.814 CC lib/ftl/ftl_writer.o 00:04:47.814 CC lib/ftl/ftl_rq.o 00:04:47.814 CC lib/ftl/ftl_reloc.o 00:04:47.814 CC lib/ftl/ftl_l2p_cache.o 00:04:47.814 CC lib/ftl/ftl_p2l.o 00:04:47.814 CC lib/ftl/ftl_p2l_log.o 00:04:47.814 CC lib/ftl/mngt/ftl_mngt.o 00:04:47.814 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:47.814 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:47.814 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:47.814 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:47.814 SYMLINK libspdk_lvol.so 00:04:47.814 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:48.077 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:48.077 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:48.077 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:48.077 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:48.077 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:48.077 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:48.078 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:48.078 CC lib/ftl/utils/ftl_conf.o 00:04:48.078 CC lib/ftl/utils/ftl_md.o 00:04:48.078 CC lib/ftl/utils/ftl_mempool.o 00:04:48.078 CC lib/ftl/utils/ftl_bitmap.o 00:04:48.342 CC lib/ftl/utils/ftl_property.o 00:04:48.342 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:48.342 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:48.342 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:48.342 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:48.342 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:48.342 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:48.342 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:48.342 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:48.342 CC 
lib/ftl/upgrade/ftl_sb_v5.o 00:04:48.342 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:48.342 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:48.342 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:48.342 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:48.342 CC lib/ftl/base/ftl_base_dev.o 00:04:48.602 CC lib/ftl/base/ftl_base_bdev.o 00:04:48.602 CC lib/ftl/ftl_trace.o 00:04:48.602 LIB libspdk_nbd.a 00:04:48.602 SO libspdk_nbd.so.7.0 00:04:48.602 SYMLINK libspdk_nbd.so 00:04:48.602 LIB libspdk_scsi.a 00:04:48.860 SO libspdk_scsi.so.9.0 00:04:48.860 SYMLINK libspdk_scsi.so 00:04:48.860 LIB libspdk_ublk.a 00:04:48.860 SO libspdk_ublk.so.3.0 00:04:49.119 SYMLINK libspdk_ublk.so 00:04:49.119 CC lib/iscsi/conn.o 00:04:49.119 CC lib/vhost/vhost.o 00:04:49.119 CC lib/iscsi/init_grp.o 00:04:49.119 CC lib/vhost/vhost_rpc.o 00:04:49.119 CC lib/iscsi/iscsi.o 00:04:49.119 CC lib/vhost/vhost_scsi.o 00:04:49.119 CC lib/vhost/vhost_blk.o 00:04:49.119 CC lib/iscsi/param.o 00:04:49.119 CC lib/vhost/rte_vhost_user.o 00:04:49.119 CC lib/iscsi/portal_grp.o 00:04:49.119 CC lib/iscsi/tgt_node.o 00:04:49.119 CC lib/iscsi/iscsi_subsystem.o 00:04:49.119 CC lib/iscsi/iscsi_rpc.o 00:04:49.119 CC lib/iscsi/task.o 00:04:49.378 LIB libspdk_ftl.a 00:04:49.378 SO libspdk_ftl.so.9.0 00:04:49.637 SYMLINK libspdk_ftl.so 00:04:50.204 LIB libspdk_vhost.a 00:04:50.204 SO libspdk_vhost.so.8.0 00:04:50.204 LIB libspdk_nvmf.a 00:04:50.463 SYMLINK libspdk_vhost.so 00:04:50.463 SO libspdk_nvmf.so.19.0 00:04:50.463 LIB libspdk_iscsi.a 00:04:50.463 SO libspdk_iscsi.so.8.0 00:04:50.463 SYMLINK libspdk_nvmf.so 00:04:50.724 SYMLINK libspdk_iscsi.so 00:04:50.983 CC module/env_dpdk/env_dpdk_rpc.o 00:04:50.983 CC module/vfu_device/vfu_virtio.o 00:04:50.983 CC module/vfu_device/vfu_virtio_blk.o 00:04:50.983 CC module/vfu_device/vfu_virtio_scsi.o 00:04:50.983 CC module/vfu_device/vfu_virtio_rpc.o 00:04:50.983 CC module/vfu_device/vfu_virtio_fs.o 00:04:50.983 CC module/keyring/linux/keyring.o 00:04:50.983 CC module/keyring/linux/keyring_rpc.o 
00:04:50.983 CC module/keyring/file/keyring_rpc.o 00:04:50.983 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:50.983 CC module/keyring/file/keyring.o 00:04:50.983 CC module/sock/posix/posix.o 00:04:50.983 CC module/accel/ioat/accel_ioat.o 00:04:50.983 CC module/fsdev/aio/fsdev_aio.o 00:04:50.983 CC module/scheduler/gscheduler/gscheduler.o 00:04:50.983 CC module/accel/iaa/accel_iaa.o 00:04:50.983 CC module/accel/error/accel_error.o 00:04:50.983 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:50.983 CC module/accel/ioat/accel_ioat_rpc.o 00:04:50.983 CC module/accel/iaa/accel_iaa_rpc.o 00:04:50.983 CC module/accel/error/accel_error_rpc.o 00:04:50.983 CC module/accel/dsa/accel_dsa.o 00:04:50.983 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:50.983 CC module/fsdev/aio/linux_aio_mgr.o 00:04:50.983 CC module/blob/bdev/blob_bdev.o 00:04:50.983 CC module/accel/dsa/accel_dsa_rpc.o 00:04:51.243 LIB libspdk_env_dpdk_rpc.a 00:04:51.243 SO libspdk_env_dpdk_rpc.so.6.0 00:04:51.243 SYMLINK libspdk_env_dpdk_rpc.so 00:04:51.243 LIB libspdk_keyring_linux.a 00:04:51.243 LIB libspdk_keyring_file.a 00:04:51.243 LIB libspdk_scheduler_dpdk_governor.a 00:04:51.243 LIB libspdk_scheduler_gscheduler.a 00:04:51.243 SO libspdk_keyring_linux.so.1.0 00:04:51.243 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:51.243 SO libspdk_keyring_file.so.2.0 00:04:51.243 SO libspdk_scheduler_gscheduler.so.4.0 00:04:51.243 LIB libspdk_accel_error.a 00:04:51.243 SO libspdk_accel_error.so.2.0 00:04:51.243 LIB libspdk_scheduler_dynamic.a 00:04:51.243 LIB libspdk_accel_iaa.a 00:04:51.243 LIB libspdk_accel_ioat.a 00:04:51.243 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:51.243 SYMLINK libspdk_scheduler_gscheduler.so 00:04:51.243 SYMLINK libspdk_keyring_linux.so 00:04:51.243 SYMLINK libspdk_keyring_file.so 00:04:51.243 SO libspdk_scheduler_dynamic.so.4.0 00:04:51.243 SO libspdk_accel_ioat.so.6.0 00:04:51.243 SO libspdk_accel_iaa.so.3.0 00:04:51.243 SYMLINK libspdk_accel_error.so 00:04:51.243 SYMLINK 
libspdk_scheduler_dynamic.so 00:04:51.243 SYMLINK libspdk_accel_ioat.so 00:04:51.243 SYMLINK libspdk_accel_iaa.so 00:04:51.502 LIB libspdk_accel_dsa.a 00:04:51.502 SO libspdk_accel_dsa.so.5.0 00:04:51.502 LIB libspdk_blob_bdev.a 00:04:51.502 SO libspdk_blob_bdev.so.11.0 00:04:51.502 SYMLINK libspdk_accel_dsa.so 00:04:51.502 SYMLINK libspdk_blob_bdev.so 00:04:51.765 LIB libspdk_vfu_device.a 00:04:51.765 LIB libspdk_fsdev_aio.a 00:04:51.765 SO libspdk_vfu_device.so.3.0 00:04:51.765 CC module/blobfs/bdev/blobfs_bdev.o 00:04:51.765 CC module/bdev/delay/vbdev_delay.o 00:04:51.765 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:51.765 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:51.765 CC module/bdev/null/bdev_null.o 00:04:51.765 CC module/bdev/null/bdev_null_rpc.o 00:04:51.765 CC module/bdev/malloc/bdev_malloc.o 00:04:51.765 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:51.765 CC module/bdev/error/vbdev_error.o 00:04:51.765 CC module/bdev/gpt/gpt.o 00:04:51.765 CC module/bdev/gpt/vbdev_gpt.o 00:04:51.765 CC module/bdev/error/vbdev_error_rpc.o 00:04:51.765 CC module/bdev/passthru/vbdev_passthru.o 00:04:51.765 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:51.765 CC module/bdev/split/vbdev_split.o 00:04:51.765 CC module/bdev/raid/bdev_raid.o 00:04:51.765 CC module/bdev/aio/bdev_aio.o 00:04:51.765 CC module/bdev/lvol/vbdev_lvol.o 00:04:51.765 CC module/bdev/raid/bdev_raid_rpc.o 00:04:51.765 CC module/bdev/split/vbdev_split_rpc.o 00:04:51.765 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:51.765 CC module/bdev/raid/bdev_raid_sb.o 00:04:51.765 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:51.765 CC module/bdev/nvme/bdev_nvme.o 00:04:51.765 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:51.765 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:51.765 SO libspdk_fsdev_aio.so.1.0 00:04:51.765 CC module/bdev/aio/bdev_aio_rpc.o 00:04:51.765 CC module/bdev/raid/raid0.o 00:04:51.765 CC module/bdev/nvme/nvme_rpc.o 00:04:51.765 CC module/bdev/raid/raid1.o 00:04:51.765 CC 
module/bdev/nvme/bdev_mdns_client.o 00:04:51.765 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:51.765 CC module/bdev/ftl/bdev_ftl.o 00:04:51.765 CC module/bdev/raid/concat.o 00:04:51.765 CC module/bdev/nvme/vbdev_opal.o 00:04:51.765 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:51.765 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:51.765 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:51.765 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:51.765 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:51.765 CC module/bdev/iscsi/bdev_iscsi.o 00:04:51.765 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:52.026 SYMLINK libspdk_fsdev_aio.so 00:04:52.026 SYMLINK libspdk_vfu_device.so 00:04:52.026 LIB libspdk_sock_posix.a 00:04:52.026 SO libspdk_sock_posix.so.6.0 00:04:52.284 SYMLINK libspdk_sock_posix.so 00:04:52.284 LIB libspdk_blobfs_bdev.a 00:04:52.284 SO libspdk_blobfs_bdev.so.6.0 00:04:52.284 LIB libspdk_bdev_split.a 00:04:52.284 LIB libspdk_bdev_error.a 00:04:52.284 SO libspdk_bdev_split.so.6.0 00:04:52.284 SYMLINK libspdk_blobfs_bdev.so 00:04:52.284 SO libspdk_bdev_error.so.6.0 00:04:52.284 LIB libspdk_bdev_null.a 00:04:52.284 SO libspdk_bdev_null.so.6.0 00:04:52.284 SYMLINK libspdk_bdev_split.so 00:04:52.284 LIB libspdk_bdev_ftl.a 00:04:52.284 LIB libspdk_bdev_gpt.a 00:04:52.284 SYMLINK libspdk_bdev_error.so 00:04:52.284 SO libspdk_bdev_ftl.so.6.0 00:04:52.284 LIB libspdk_bdev_aio.a 00:04:52.284 SO libspdk_bdev_gpt.so.6.0 00:04:52.284 SYMLINK libspdk_bdev_null.so 00:04:52.285 LIB libspdk_bdev_passthru.a 00:04:52.285 SO libspdk_bdev_aio.so.6.0 00:04:52.285 LIB libspdk_bdev_iscsi.a 00:04:52.285 LIB libspdk_bdev_delay.a 00:04:52.544 SO libspdk_bdev_passthru.so.6.0 00:04:52.544 SO libspdk_bdev_iscsi.so.6.0 00:04:52.544 SO libspdk_bdev_delay.so.6.0 00:04:52.544 SYMLINK libspdk_bdev_ftl.so 00:04:52.544 SYMLINK libspdk_bdev_gpt.so 00:04:52.544 LIB libspdk_bdev_zone_block.a 00:04:52.544 SYMLINK libspdk_bdev_aio.so 00:04:52.544 LIB libspdk_bdev_malloc.a 00:04:52.544 SO 
libspdk_bdev_zone_block.so.6.0 00:04:52.544 SYMLINK libspdk_bdev_passthru.so 00:04:52.544 SYMLINK libspdk_bdev_iscsi.so 00:04:52.544 SYMLINK libspdk_bdev_delay.so 00:04:52.544 SO libspdk_bdev_malloc.so.6.0 00:04:52.544 SYMLINK libspdk_bdev_zone_block.so 00:04:52.544 SYMLINK libspdk_bdev_malloc.so 00:04:52.544 LIB libspdk_bdev_lvol.a 00:04:52.544 SO libspdk_bdev_lvol.so.6.0 00:04:52.544 LIB libspdk_bdev_virtio.a 00:04:52.544 SYMLINK libspdk_bdev_lvol.so 00:04:52.544 SO libspdk_bdev_virtio.so.6.0 00:04:52.802 SYMLINK libspdk_bdev_virtio.so 00:04:53.061 LIB libspdk_bdev_raid.a 00:04:53.061 SO libspdk_bdev_raid.so.6.0 00:04:53.061 SYMLINK libspdk_bdev_raid.so 00:04:54.441 LIB libspdk_bdev_nvme.a 00:04:54.441 SO libspdk_bdev_nvme.so.7.0 00:04:54.441 SYMLINK libspdk_bdev_nvme.so 00:04:54.700 CC module/event/subsystems/sock/sock.o 00:04:54.701 CC module/event/subsystems/fsdev/fsdev.o 00:04:54.701 CC module/event/subsystems/iobuf/iobuf.o 00:04:54.701 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:54.701 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:54.701 CC module/event/subsystems/scheduler/scheduler.o 00:04:54.701 CC module/event/subsystems/keyring/keyring.o 00:04:54.701 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:54.701 CC module/event/subsystems/vmd/vmd.o 00:04:54.701 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:54.960 LIB libspdk_event_keyring.a 00:04:54.960 LIB libspdk_event_vhost_blk.a 00:04:54.960 LIB libspdk_event_fsdev.a 00:04:54.960 LIB libspdk_event_vfu_tgt.a 00:04:54.960 LIB libspdk_event_vmd.a 00:04:54.960 LIB libspdk_event_scheduler.a 00:04:54.960 LIB libspdk_event_sock.a 00:04:54.960 SO libspdk_event_keyring.so.1.0 00:04:54.960 LIB libspdk_event_iobuf.a 00:04:54.960 SO libspdk_event_vhost_blk.so.3.0 00:04:54.960 SO libspdk_event_fsdev.so.1.0 00:04:54.960 SO libspdk_event_vfu_tgt.so.3.0 00:04:54.960 SO libspdk_event_scheduler.so.4.0 00:04:54.960 SO libspdk_event_sock.so.5.0 00:04:54.960 SO libspdk_event_vmd.so.6.0 00:04:54.960 SO 
libspdk_event_iobuf.so.3.0 00:04:54.960 SYMLINK libspdk_event_keyring.so 00:04:54.960 SYMLINK libspdk_event_fsdev.so 00:04:54.960 SYMLINK libspdk_event_vhost_blk.so 00:04:54.960 SYMLINK libspdk_event_vfu_tgt.so 00:04:54.960 SYMLINK libspdk_event_scheduler.so 00:04:54.960 SYMLINK libspdk_event_sock.so 00:04:54.960 SYMLINK libspdk_event_vmd.so 00:04:54.960 SYMLINK libspdk_event_iobuf.so 00:04:55.220 CC module/event/subsystems/accel/accel.o 00:04:55.220 LIB libspdk_event_accel.a 00:04:55.479 SO libspdk_event_accel.so.6.0 00:04:55.479 SYMLINK libspdk_event_accel.so 00:04:55.479 CC module/event/subsystems/bdev/bdev.o 00:04:55.738 LIB libspdk_event_bdev.a 00:04:55.738 SO libspdk_event_bdev.so.6.0 00:04:55.738 SYMLINK libspdk_event_bdev.so 00:04:55.997 CC module/event/subsystems/nbd/nbd.o 00:04:55.997 CC module/event/subsystems/scsi/scsi.o 00:04:55.997 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:55.997 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:55.997 CC module/event/subsystems/ublk/ublk.o 00:04:56.256 LIB libspdk_event_ublk.a 00:04:56.256 LIB libspdk_event_nbd.a 00:04:56.256 LIB libspdk_event_scsi.a 00:04:56.256 SO libspdk_event_nbd.so.6.0 00:04:56.256 SO libspdk_event_ublk.so.3.0 00:04:56.256 SO libspdk_event_scsi.so.6.0 00:04:56.256 SYMLINK libspdk_event_nbd.so 00:04:56.256 SYMLINK libspdk_event_ublk.so 00:04:56.256 SYMLINK libspdk_event_scsi.so 00:04:56.256 LIB libspdk_event_nvmf.a 00:04:56.256 SO libspdk_event_nvmf.so.6.0 00:04:56.256 SYMLINK libspdk_event_nvmf.so 00:04:56.515 CC module/event/subsystems/iscsi/iscsi.o 00:04:56.515 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:56.515 LIB libspdk_event_vhost_scsi.a 00:04:56.515 SO libspdk_event_vhost_scsi.so.3.0 00:04:56.515 LIB libspdk_event_iscsi.a 00:04:56.515 SO libspdk_event_iscsi.so.6.0 00:04:56.515 SYMLINK libspdk_event_vhost_scsi.so 00:04:56.775 SYMLINK libspdk_event_iscsi.so 00:04:56.775 SO libspdk.so.6.0 00:04:56.775 SYMLINK libspdk.so 00:04:57.039 CC test/rpc_client/rpc_client_test.o 
00:04:57.039 CC app/trace_record/trace_record.o 00:04:57.039 CXX app/trace/trace.o 00:04:57.039 CC app/spdk_lspci/spdk_lspci.o 00:04:57.039 TEST_HEADER include/spdk/accel.h 00:04:57.039 TEST_HEADER include/spdk/accel_module.h 00:04:57.039 CC app/spdk_nvme_perf/perf.o 00:04:57.039 TEST_HEADER include/spdk/assert.h 00:04:57.039 CC app/spdk_nvme_discover/discovery_aer.o 00:04:57.039 TEST_HEADER include/spdk/barrier.h 00:04:57.039 TEST_HEADER include/spdk/base64.h 00:04:57.039 CC app/spdk_nvme_identify/identify.o 00:04:57.039 TEST_HEADER include/spdk/bdev.h 00:04:57.039 CC app/spdk_top/spdk_top.o 00:04:57.039 TEST_HEADER include/spdk/bdev_module.h 00:04:57.039 TEST_HEADER include/spdk/bdev_zone.h 00:04:57.039 TEST_HEADER include/spdk/bit_array.h 00:04:57.039 TEST_HEADER include/spdk/bit_pool.h 00:04:57.039 TEST_HEADER include/spdk/blob_bdev.h 00:04:57.039 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:57.039 TEST_HEADER include/spdk/blob.h 00:04:57.039 TEST_HEADER include/spdk/blobfs.h 00:04:57.039 TEST_HEADER include/spdk/conf.h 00:04:57.039 TEST_HEADER include/spdk/config.h 00:04:57.039 TEST_HEADER include/spdk/cpuset.h 00:04:57.039 TEST_HEADER include/spdk/crc16.h 00:04:57.039 TEST_HEADER include/spdk/crc32.h 00:04:57.039 TEST_HEADER include/spdk/crc64.h 00:04:57.039 TEST_HEADER include/spdk/dif.h 00:04:57.039 TEST_HEADER include/spdk/dma.h 00:04:57.039 TEST_HEADER include/spdk/endian.h 00:04:57.039 TEST_HEADER include/spdk/env.h 00:04:57.039 TEST_HEADER include/spdk/env_dpdk.h 00:04:57.039 TEST_HEADER include/spdk/event.h 00:04:57.039 TEST_HEADER include/spdk/fd_group.h 00:04:57.039 TEST_HEADER include/spdk/fd.h 00:04:57.039 TEST_HEADER include/spdk/file.h 00:04:57.039 TEST_HEADER include/spdk/fsdev.h 00:04:57.039 TEST_HEADER include/spdk/fsdev_module.h 00:04:57.039 TEST_HEADER include/spdk/ftl.h 00:04:57.039 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:57.039 TEST_HEADER include/spdk/gpt_spec.h 00:04:57.039 TEST_HEADER include/spdk/hexlify.h 00:04:57.039 
TEST_HEADER include/spdk/histogram_data.h 00:04:57.039 TEST_HEADER include/spdk/idxd.h 00:04:57.039 TEST_HEADER include/spdk/idxd_spec.h 00:04:57.039 TEST_HEADER include/spdk/init.h 00:04:57.039 TEST_HEADER include/spdk/ioat.h 00:04:57.039 TEST_HEADER include/spdk/ioat_spec.h 00:04:57.039 TEST_HEADER include/spdk/iscsi_spec.h 00:04:57.039 TEST_HEADER include/spdk/json.h 00:04:57.039 TEST_HEADER include/spdk/jsonrpc.h 00:04:57.039 TEST_HEADER include/spdk/keyring_module.h 00:04:57.039 TEST_HEADER include/spdk/keyring.h 00:04:57.039 TEST_HEADER include/spdk/likely.h 00:04:57.040 TEST_HEADER include/spdk/log.h 00:04:57.040 TEST_HEADER include/spdk/lvol.h 00:04:57.040 TEST_HEADER include/spdk/md5.h 00:04:57.040 TEST_HEADER include/spdk/memory.h 00:04:57.040 TEST_HEADER include/spdk/mmio.h 00:04:57.040 TEST_HEADER include/spdk/nbd.h 00:04:57.040 TEST_HEADER include/spdk/net.h 00:04:57.040 TEST_HEADER include/spdk/notify.h 00:04:57.040 TEST_HEADER include/spdk/nvme.h 00:04:57.040 TEST_HEADER include/spdk/nvme_intel.h 00:04:57.040 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:57.040 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:57.040 TEST_HEADER include/spdk/nvme_spec.h 00:04:57.040 TEST_HEADER include/spdk/nvme_zns.h 00:04:57.040 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:57.040 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:57.040 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:57.040 TEST_HEADER include/spdk/nvmf.h 00:04:57.040 TEST_HEADER include/spdk/nvmf_spec.h 00:04:57.040 TEST_HEADER include/spdk/nvmf_transport.h 00:04:57.040 TEST_HEADER include/spdk/opal.h 00:04:57.040 TEST_HEADER include/spdk/opal_spec.h 00:04:57.040 TEST_HEADER include/spdk/pci_ids.h 00:04:57.040 TEST_HEADER include/spdk/pipe.h 00:04:57.040 TEST_HEADER include/spdk/queue.h 00:04:57.040 TEST_HEADER include/spdk/reduce.h 00:04:57.040 TEST_HEADER include/spdk/rpc.h 00:04:57.040 TEST_HEADER include/spdk/scheduler.h 00:04:57.040 TEST_HEADER include/spdk/scsi.h 00:04:57.040 TEST_HEADER 
include/spdk/scsi_spec.h 00:04:57.040 TEST_HEADER include/spdk/sock.h 00:04:57.040 TEST_HEADER include/spdk/string.h 00:04:57.040 TEST_HEADER include/spdk/stdinc.h 00:04:57.040 TEST_HEADER include/spdk/thread.h 00:04:57.040 TEST_HEADER include/spdk/trace.h 00:04:57.040 TEST_HEADER include/spdk/tree.h 00:04:57.040 TEST_HEADER include/spdk/trace_parser.h 00:04:57.040 TEST_HEADER include/spdk/ublk.h 00:04:57.040 TEST_HEADER include/spdk/uuid.h 00:04:57.040 TEST_HEADER include/spdk/util.h 00:04:57.040 TEST_HEADER include/spdk/version.h 00:04:57.040 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:57.040 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:57.040 TEST_HEADER include/spdk/vhost.h 00:04:57.040 CC app/spdk_dd/spdk_dd.o 00:04:57.040 TEST_HEADER include/spdk/xor.h 00:04:57.040 TEST_HEADER include/spdk/vmd.h 00:04:57.040 TEST_HEADER include/spdk/zipf.h 00:04:57.040 CXX test/cpp_headers/accel.o 00:04:57.040 CXX test/cpp_headers/assert.o 00:04:57.040 CXX test/cpp_headers/accel_module.o 00:04:57.040 CXX test/cpp_headers/barrier.o 00:04:57.040 CXX test/cpp_headers/base64.o 00:04:57.040 CXX test/cpp_headers/bdev.o 00:04:57.040 CXX test/cpp_headers/bdev_module.o 00:04:57.040 CXX test/cpp_headers/bdev_zone.o 00:04:57.040 CXX test/cpp_headers/bit_array.o 00:04:57.040 CXX test/cpp_headers/bit_pool.o 00:04:57.040 CXX test/cpp_headers/blob_bdev.o 00:04:57.040 CXX test/cpp_headers/blobfs_bdev.o 00:04:57.040 CXX test/cpp_headers/blobfs.o 00:04:57.040 CXX test/cpp_headers/blob.o 00:04:57.040 CXX test/cpp_headers/conf.o 00:04:57.040 CXX test/cpp_headers/config.o 00:04:57.040 CXX test/cpp_headers/cpuset.o 00:04:57.040 CXX test/cpp_headers/crc16.o 00:04:57.040 CC app/iscsi_tgt/iscsi_tgt.o 00:04:57.040 CC app/nvmf_tgt/nvmf_main.o 00:04:57.040 CXX test/cpp_headers/crc32.o 00:04:57.040 CC app/spdk_tgt/spdk_tgt.o 00:04:57.040 CC test/thread/poller_perf/poller_perf.o 00:04:57.040 CC test/app/jsoncat/jsoncat.o 00:04:57.040 CC test/env/vtophys/vtophys.o 00:04:57.040 CC 
examples/util/zipf/zipf.o 00:04:57.040 CC test/app/stub/stub.o 00:04:57.040 CC test/env/pci/pci_ut.o 00:04:57.040 CC test/app/histogram_perf/histogram_perf.o 00:04:57.040 CC examples/ioat/verify/verify.o 00:04:57.040 CC examples/ioat/perf/perf.o 00:04:57.040 CC test/env/memory/memory_ut.o 00:04:57.040 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:57.040 CC app/fio/nvme/fio_plugin.o 00:04:57.305 CC test/dma/test_dma/test_dma.o 00:04:57.305 CC test/app/bdev_svc/bdev_svc.o 00:04:57.305 CC app/fio/bdev/fio_plugin.o 00:04:57.305 LINK spdk_lspci 00:04:57.305 CC test/env/mem_callbacks/mem_callbacks.o 00:04:57.305 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:57.305 LINK rpc_client_test 00:04:57.305 LINK interrupt_tgt 00:04:57.305 LINK spdk_nvme_discover 00:04:57.572 LINK poller_perf 00:04:57.572 LINK jsoncat 00:04:57.572 LINK zipf 00:04:57.572 LINK vtophys 00:04:57.572 CXX test/cpp_headers/crc64.o 00:04:57.572 CXX test/cpp_headers/dif.o 00:04:57.572 LINK histogram_perf 00:04:57.572 CXX test/cpp_headers/dma.o 00:04:57.572 CXX test/cpp_headers/endian.o 00:04:57.572 LINK env_dpdk_post_init 00:04:57.572 CXX test/cpp_headers/env_dpdk.o 00:04:57.572 CXX test/cpp_headers/env.o 00:04:57.572 LINK spdk_trace_record 00:04:57.572 CXX test/cpp_headers/event.o 00:04:57.572 CXX test/cpp_headers/fd_group.o 00:04:57.572 CXX test/cpp_headers/fd.o 00:04:57.572 LINK iscsi_tgt 00:04:57.572 CXX test/cpp_headers/file.o 00:04:57.572 CXX test/cpp_headers/fsdev.o 00:04:57.572 LINK nvmf_tgt 00:04:57.572 CXX test/cpp_headers/fsdev_module.o 00:04:57.572 LINK stub 00:04:57.572 CXX test/cpp_headers/ftl.o 00:04:57.572 CXX test/cpp_headers/fuse_dispatcher.o 00:04:57.572 CXX test/cpp_headers/gpt_spec.o 00:04:57.572 LINK bdev_svc 00:04:57.572 LINK spdk_tgt 00:04:57.572 LINK ioat_perf 00:04:57.572 LINK verify 00:04:57.572 CXX test/cpp_headers/hexlify.o 00:04:57.572 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:57.572 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:57.572 CXX 
test/cpp_headers/histogram_data.o 00:04:57.572 CXX test/cpp_headers/idxd.o 00:04:57.832 CXX test/cpp_headers/idxd_spec.o 00:04:57.832 CXX test/cpp_headers/init.o 00:04:57.832 CXX test/cpp_headers/ioat.o 00:04:57.832 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:57.832 CXX test/cpp_headers/ioat_spec.o 00:04:57.832 CXX test/cpp_headers/iscsi_spec.o 00:04:57.832 LINK spdk_dd 00:04:57.832 CXX test/cpp_headers/json.o 00:04:57.832 CXX test/cpp_headers/jsonrpc.o 00:04:57.832 LINK pci_ut 00:04:57.832 LINK spdk_trace 00:04:57.832 CXX test/cpp_headers/keyring.o 00:04:57.832 CXX test/cpp_headers/keyring_module.o 00:04:57.832 CXX test/cpp_headers/likely.o 00:04:57.832 CXX test/cpp_headers/log.o 00:04:57.832 CXX test/cpp_headers/lvol.o 00:04:57.832 CXX test/cpp_headers/md5.o 00:04:57.832 CXX test/cpp_headers/mmio.o 00:04:57.832 CXX test/cpp_headers/memory.o 00:04:57.832 CXX test/cpp_headers/nbd.o 00:04:58.103 CXX test/cpp_headers/net.o 00:04:58.103 CXX test/cpp_headers/notify.o 00:04:58.103 CXX test/cpp_headers/nvme.o 00:04:58.103 CXX test/cpp_headers/nvme_intel.o 00:04:58.103 CXX test/cpp_headers/nvme_ocssd.o 00:04:58.103 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:58.103 CXX test/cpp_headers/nvme_spec.o 00:04:58.103 CXX test/cpp_headers/nvme_zns.o 00:04:58.103 CXX test/cpp_headers/nvmf_cmd.o 00:04:58.103 CC test/event/reactor/reactor.o 00:04:58.103 CC test/event/event_perf/event_perf.o 00:04:58.103 CC test/event/reactor_perf/reactor_perf.o 00:04:58.103 CC test/event/app_repeat/app_repeat.o 00:04:58.103 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:58.103 CXX test/cpp_headers/nvmf.o 00:04:58.103 CXX test/cpp_headers/nvmf_spec.o 00:04:58.103 CXX test/cpp_headers/nvmf_transport.o 00:04:58.103 CC test/event/scheduler/scheduler.o 00:04:58.103 LINK nvme_fuzz 00:04:58.103 CC examples/sock/hello_world/hello_sock.o 00:04:58.103 CC examples/thread/thread/thread_ex.o 00:04:58.103 CXX test/cpp_headers/opal.o 00:04:58.103 CXX test/cpp_headers/opal_spec.o 00:04:58.103 CC 
examples/idxd/perf/perf.o 00:04:58.103 CC examples/vmd/lsvmd/lsvmd.o 00:04:58.366 CXX test/cpp_headers/pci_ids.o 00:04:58.366 LINK test_dma 00:04:58.366 CC examples/vmd/led/led.o 00:04:58.366 CXX test/cpp_headers/pipe.o 00:04:58.366 CXX test/cpp_headers/queue.o 00:04:58.366 CXX test/cpp_headers/reduce.o 00:04:58.366 CXX test/cpp_headers/rpc.o 00:04:58.366 CXX test/cpp_headers/scheduler.o 00:04:58.366 CXX test/cpp_headers/scsi.o 00:04:58.366 CXX test/cpp_headers/scsi_spec.o 00:04:58.366 CXX test/cpp_headers/sock.o 00:04:58.366 CXX test/cpp_headers/stdinc.o 00:04:58.366 CXX test/cpp_headers/string.o 00:04:58.366 CXX test/cpp_headers/thread.o 00:04:58.366 CXX test/cpp_headers/trace.o 00:04:58.366 LINK reactor 00:04:58.366 CXX test/cpp_headers/trace_parser.o 00:04:58.366 LINK spdk_bdev 00:04:58.366 LINK event_perf 00:04:58.366 LINK mem_callbacks 00:04:58.366 LINK reactor_perf 00:04:58.366 CXX test/cpp_headers/tree.o 00:04:58.366 CXX test/cpp_headers/ublk.o 00:04:58.366 CXX test/cpp_headers/util.o 00:04:58.366 LINK app_repeat 00:04:58.366 CXX test/cpp_headers/uuid.o 00:04:58.366 LINK spdk_nvme 00:04:58.366 CXX test/cpp_headers/version.o 00:04:58.626 CXX test/cpp_headers/vfio_user_spec.o 00:04:58.626 CXX test/cpp_headers/vfio_user_pci.o 00:04:58.626 CXX test/cpp_headers/vhost.o 00:04:58.626 CXX test/cpp_headers/vmd.o 00:04:58.626 LINK lsvmd 00:04:58.626 CXX test/cpp_headers/xor.o 00:04:58.626 CXX test/cpp_headers/zipf.o 00:04:58.626 CC app/vhost/vhost.o 00:04:58.626 LINK spdk_nvme_perf 00:04:58.626 LINK led 00:04:58.626 LINK vhost_fuzz 00:04:58.626 LINK spdk_nvme_identify 00:04:58.626 LINK scheduler 00:04:58.626 LINK hello_sock 00:04:58.626 LINK spdk_top 00:04:58.626 LINK thread 00:04:58.886 LINK idxd_perf 00:04:58.886 CC test/nvme/connect_stress/connect_stress.o 00:04:58.886 CC test/nvme/aer/aer.o 00:04:58.886 CC test/nvme/fdp/fdp.o 00:04:58.886 CC test/nvme/simple_copy/simple_copy.o 00:04:58.886 CC test/nvme/err_injection/err_injection.o 00:04:58.886 CC 
test/nvme/cuse/cuse.o 00:04:58.886 CC test/nvme/compliance/nvme_compliance.o 00:04:58.886 CC test/nvme/sgl/sgl.o 00:04:58.886 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:58.886 CC test/nvme/reset/reset.o 00:04:58.886 CC test/nvme/boot_partition/boot_partition.o 00:04:58.886 CC test/nvme/e2edp/nvme_dp.o 00:04:58.886 LINK vhost 00:04:58.886 CC test/nvme/fused_ordering/fused_ordering.o 00:04:58.886 CC test/nvme/overhead/overhead.o 00:04:58.886 CC test/nvme/startup/startup.o 00:04:58.886 CC test/nvme/reserve/reserve.o 00:04:58.886 CC test/accel/dif/dif.o 00:04:58.886 CC test/blobfs/mkfs/mkfs.o 00:04:58.886 CC test/lvol/esnap/esnap.o 00:04:59.146 CC examples/nvme/reconnect/reconnect.o 00:04:59.146 CC examples/nvme/hotplug/hotplug.o 00:04:59.146 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:59.146 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:59.146 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:59.146 CC examples/nvme/arbitration/arbitration.o 00:04:59.146 CC examples/nvme/hello_world/hello_world.o 00:04:59.146 CC examples/nvme/abort/abort.o 00:04:59.146 LINK boot_partition 00:04:59.146 LINK connect_stress 00:04:59.146 LINK fused_ordering 00:04:59.146 LINK startup 00:04:59.146 LINK reserve 00:04:59.146 CC examples/accel/perf/accel_perf.o 00:04:59.146 LINK memory_ut 00:04:59.146 CC examples/blob/hello_world/hello_blob.o 00:04:59.146 CC examples/blob/cli/blobcli.o 00:04:59.146 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:59.146 LINK simple_copy 00:04:59.146 LINK err_injection 00:04:59.406 LINK overhead 00:04:59.406 LINK doorbell_aers 00:04:59.406 LINK mkfs 00:04:59.406 LINK nvme_dp 00:04:59.406 LINK nvme_compliance 00:04:59.406 LINK reset 00:04:59.406 LINK sgl 00:04:59.406 LINK pmr_persistence 00:04:59.406 LINK aer 00:04:59.406 LINK cmb_copy 00:04:59.406 LINK fdp 00:04:59.664 LINK hello_world 00:04:59.664 LINK hotplug 00:04:59.664 LINK abort 00:04:59.664 LINK hello_blob 00:04:59.664 LINK reconnect 00:04:59.664 LINK hello_fsdev 00:04:59.664 LINK 
arbitration 00:04:59.664 LINK dif 00:04:59.923 LINK nvme_manage 00:04:59.923 LINK accel_perf 00:04:59.923 LINK blobcli 00:05:00.182 LINK iscsi_fuzz 00:05:00.182 CC test/bdev/bdevio/bdevio.o 00:05:00.182 CC examples/bdev/hello_world/hello_bdev.o 00:05:00.182 CC examples/bdev/bdevperf/bdevperf.o 00:05:00.441 LINK hello_bdev 00:05:00.441 LINK bdevio 00:05:00.700 LINK cuse 00:05:00.959 LINK bdevperf 00:05:01.526 CC examples/nvmf/nvmf/nvmf.o 00:05:01.787 LINK nvmf 00:05:04.330 LINK esnap 00:05:04.330 00:05:04.330 real 1m7.406s 00:05:04.330 user 9m3.105s 00:05:04.330 sys 1m58.211s 00:05:04.330 22:28:07 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:04.330 22:28:07 make -- common/autotest_common.sh@10 -- $ set +x 00:05:04.330 ************************************ 00:05:04.330 END TEST make 00:05:04.330 ************************************ 00:05:04.330 22:28:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:04.330 22:28:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:04.330 22:28:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:04.330 22:28:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:04.330 22:28:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:04.330 22:28:07 -- pm/common@44 -- $ pid=6126 00:05:04.330 22:28:07 -- pm/common@50 -- $ kill -TERM 6126 00:05:04.330 22:28:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:04.330 22:28:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:04.330 22:28:07 -- pm/common@44 -- $ pid=6128 00:05:04.330 22:28:07 -- pm/common@50 -- $ kill -TERM 6128 00:05:04.330 22:28:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:04.330 22:28:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:04.330 22:28:07 -- 
pm/common@44 -- $ pid=6130 00:05:04.330 22:28:07 -- pm/common@50 -- $ kill -TERM 6130 00:05:04.330 22:28:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:04.330 22:28:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:04.330 22:28:07 -- pm/common@44 -- $ pid=6159 00:05:04.330 22:28:07 -- pm/common@50 -- $ sudo -E kill -TERM 6159 00:05:04.589 22:28:07 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:04.589 22:28:07 -- common/autotest_common.sh@1691 -- # lcov --version 00:05:04.589 22:28:07 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:04.589 22:28:07 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:04.589 22:28:07 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.589 22:28:07 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.589 22:28:07 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.589 22:28:07 -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.589 22:28:07 -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.589 22:28:07 -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.589 22:28:07 -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.589 22:28:07 -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.589 22:28:07 -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.589 22:28:07 -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.589 22:28:07 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.589 22:28:07 -- scripts/common.sh@344 -- # case "$op" in 00:05:04.589 22:28:07 -- scripts/common.sh@345 -- # : 1 00:05:04.589 22:28:07 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.589 22:28:07 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.589 22:28:07 -- scripts/common.sh@365 -- # decimal 1 00:05:04.589 22:28:07 -- scripts/common.sh@353 -- # local d=1 00:05:04.589 22:28:07 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.589 22:28:07 -- scripts/common.sh@355 -- # echo 1 00:05:04.589 22:28:07 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.589 22:28:07 -- scripts/common.sh@366 -- # decimal 2 00:05:04.589 22:28:07 -- scripts/common.sh@353 -- # local d=2 00:05:04.589 22:28:07 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.589 22:28:07 -- scripts/common.sh@355 -- # echo 2 00:05:04.589 22:28:07 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.589 22:28:07 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.589 22:28:07 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.589 22:28:07 -- scripts/common.sh@368 -- # return 0 00:05:04.589 22:28:07 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.589 22:28:07 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:04.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.589 --rc genhtml_branch_coverage=1 00:05:04.589 --rc genhtml_function_coverage=1 00:05:04.589 --rc genhtml_legend=1 00:05:04.589 --rc geninfo_all_blocks=1 00:05:04.589 --rc geninfo_unexecuted_blocks=1 00:05:04.589 00:05:04.589 ' 00:05:04.589 22:28:07 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:04.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.589 --rc genhtml_branch_coverage=1 00:05:04.589 --rc genhtml_function_coverage=1 00:05:04.589 --rc genhtml_legend=1 00:05:04.589 --rc geninfo_all_blocks=1 00:05:04.589 --rc geninfo_unexecuted_blocks=1 00:05:04.589 00:05:04.589 ' 00:05:04.589 22:28:07 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:04.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.589 --rc genhtml_branch_coverage=1 00:05:04.589 --rc 
genhtml_function_coverage=1 00:05:04.589 --rc genhtml_legend=1 00:05:04.589 --rc geninfo_all_blocks=1 00:05:04.589 --rc geninfo_unexecuted_blocks=1 00:05:04.589 00:05:04.589 ' 00:05:04.589 22:28:07 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:04.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.589 --rc genhtml_branch_coverage=1 00:05:04.589 --rc genhtml_function_coverage=1 00:05:04.589 --rc genhtml_legend=1 00:05:04.589 --rc geninfo_all_blocks=1 00:05:04.589 --rc geninfo_unexecuted_blocks=1 00:05:04.589 00:05:04.589 ' 00:05:04.589 22:28:07 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:04.589 22:28:07 -- nvmf/common.sh@7 -- # uname -s 00:05:04.589 22:28:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:04.589 22:28:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:04.589 22:28:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:04.589 22:28:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:04.589 22:28:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:04.589 22:28:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:04.589 22:28:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:04.589 22:28:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:04.589 22:28:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:04.589 22:28:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:04.589 22:28:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:04.589 22:28:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:04.589 22:28:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:04.589 22:28:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:04.589 22:28:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:04.589 22:28:07 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:04.589 22:28:07 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:04.589 22:28:07 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:04.589 22:28:07 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:04.589 22:28:07 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:04.589 22:28:07 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:04.589 22:28:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.589 22:28:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.590 22:28:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.590 22:28:07 -- paths/export.sh@5 -- # export PATH 00:05:04.590 22:28:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.590 22:28:07 -- nvmf/common.sh@51 -- # : 0 00:05:04.590 22:28:07 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:04.590 22:28:07 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:05:04.590 22:28:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:04.590 22:28:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:04.590 22:28:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:04.590 22:28:07 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:04.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:04.590 22:28:07 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:04.590 22:28:07 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:04.590 22:28:07 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:04.590 22:28:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:04.590 22:28:07 -- spdk/autotest.sh@32 -- # uname -s 00:05:04.590 22:28:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:04.590 22:28:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:04.590 22:28:07 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:04.590 22:28:07 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:04.590 22:28:07 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:04.590 22:28:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:04.590 22:28:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:04.590 22:28:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:04.590 22:28:07 -- spdk/autotest.sh@48 -- # udevadm_pid=87602 00:05:04.590 22:28:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:04.590 22:28:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:04.590 22:28:07 -- pm/common@17 -- # local monitor 00:05:04.590 22:28:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:04.590 22:28:07 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:05:04.590 22:28:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:04.590 22:28:07 -- pm/common@21 -- # date +%s 00:05:04.590 22:28:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:04.590 22:28:07 -- pm/common@21 -- # date +%s 00:05:04.590 22:28:07 -- pm/common@25 -- # sleep 1 00:05:04.590 22:28:07 -- pm/common@21 -- # date +%s 00:05:04.590 22:28:07 -- pm/common@21 -- # date +%s 00:05:04.590 22:28:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728678487 00:05:04.590 22:28:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728678487 00:05:04.590 22:28:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728678487 00:05:04.590 22:28:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728678487 00:05:04.849 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728678487_collect-vmstat.pm.log 00:05:04.849 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728678487_collect-cpu-load.pm.log 00:05:04.849 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728678487_collect-cpu-temp.pm.log 00:05:04.849 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728678487_collect-bmc-pm.bmc.pm.log 00:05:05.790 
22:28:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:05.790 22:28:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:05.790 22:28:08 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:05.790 22:28:08 -- common/autotest_common.sh@10 -- # set +x 00:05:05.790 22:28:08 -- spdk/autotest.sh@59 -- # create_test_list 00:05:05.790 22:28:08 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:05.790 22:28:08 -- common/autotest_common.sh@10 -- # set +x 00:05:05.790 22:28:08 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:05:05.790 22:28:08 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:05.790 22:28:08 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:05.790 22:28:08 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:05.790 22:28:08 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:05.790 22:28:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:05.790 22:28:08 -- common/autotest_common.sh@1455 -- # uname 00:05:05.790 22:28:08 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:05.790 22:28:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:05.790 22:28:08 -- common/autotest_common.sh@1475 -- # uname 00:05:05.790 22:28:08 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:05.790 22:28:08 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:05.790 22:28:08 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:05.790 lcov: LCOV version 1.15 00:05:05.790 22:28:08 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:23.908 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:23.908 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:45.868 22:28:45 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:45.868 22:28:45 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:45.868 22:28:45 -- common/autotest_common.sh@10 -- # set +x 00:05:45.868 22:28:45 -- spdk/autotest.sh@78 -- # rm -f 00:05:45.868 22:28:45 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:45.868 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:05:45.868 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:05:45.868 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:05:45.868 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:05:45.868 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:05:45.868 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:05:45.868 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:05:45.868 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:05:45.868 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:05:45.868 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:05:45.868 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:05:45.868 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:05:45.868 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:05:45.868 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:05:45.868 
0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:05:45.868 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:05:45.868 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:05:45.868 22:28:47 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:45.868 22:28:47 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:45.868 22:28:47 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:45.868 22:28:47 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:45.868 22:28:47 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:45.868 22:28:47 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:45.868 22:28:47 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:45.868 22:28:47 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:45.868 22:28:47 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:45.868 22:28:47 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:45.868 22:28:47 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:45.868 22:28:47 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:45.868 22:28:47 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:45.868 22:28:47 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:45.868 22:28:47 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:45.868 No valid GPT data, bailing 00:05:45.868 22:28:47 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:45.868 22:28:47 -- scripts/common.sh@394 -- # pt= 00:05:45.868 22:28:47 -- scripts/common.sh@395 -- # return 1 00:05:45.868 22:28:47 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:45.868 1+0 records in 00:05:45.868 1+0 records out 00:05:45.868 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00196906 s, 533 MB/s 00:05:45.868 22:28:47 -- spdk/autotest.sh@105 -- # sync 00:05:45.868 22:28:47 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:45.868 22:28:47 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:45.868 22:28:47 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:46.127 22:28:49 -- spdk/autotest.sh@111 -- # uname -s 00:05:46.127 22:28:49 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:46.127 22:28:49 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:46.127 22:28:49 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:47.511 Hugepages 00:05:47.511 node hugesize free / total 00:05:47.511 node0 1048576kB 0 / 0 00:05:47.511 node0 2048kB 0 / 0 00:05:47.511 node1 1048576kB 0 / 0 00:05:47.511 node1 2048kB 0 / 0 00:05:47.511 00:05:47.511 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:47.511 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:47.511 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:47.511 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:47.511 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:47.511 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:47.511 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:47.511 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:47.511 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:47.511 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:47.511 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:47.511 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:47.511 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:47.511 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:47.511 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:47.511 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:47.511 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:47.511 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:47.511 22:28:50 -- spdk/autotest.sh@117 -- # uname -s 00:05:47.511 22:28:50 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:47.511 22:28:50 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:05:47.511 22:28:50 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:48.891 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:48.891 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:48.891 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:48.891 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:48.891 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:48.891 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:48.891 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:48.891 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:48.891 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:48.891 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:48.891 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:48.891 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:48.891 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:48.891 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:48.891 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:48.891 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:49.834 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:49.834 22:28:53 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:51.213 22:28:54 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:51.213 22:28:54 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:51.213 22:28:54 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:51.213 22:28:54 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:51.213 22:28:54 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:51.213 22:28:54 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:51.213 22:28:54 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:51.213 22:28:54 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:51.213 22:28:54 -- common/autotest_common.sh@1497 -- # jq -r 
'.config[].params.traddr' 00:05:51.213 22:28:54 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:51.213 22:28:54 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:05:51.213 22:28:54 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:52.150 Waiting for block devices as requested 00:05:52.150 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:52.410 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:52.410 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:52.410 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:52.670 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:52.670 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:52.670 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:52.670 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:52.930 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:52.930 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:52.930 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:53.191 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:53.191 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:53.191 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:53.191 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:53.450 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:53.450 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:53.450 22:28:56 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:53.450 22:28:56 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:53.450 22:28:56 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:05:53.450 22:28:56 -- common/autotest_common.sh@1485 -- # grep 0000:88:00.0/nvme/nvme 00:05:53.450 22:28:56 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:53.450 22:28:56 -- common/autotest_common.sh@1486 -- # [[ -z 
/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:53.450 22:28:56 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:53.450 22:28:56 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:53.450 22:28:56 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:53.450 22:28:56 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:53.450 22:28:56 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:53.450 22:28:56 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:53.709 22:28:56 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:53.709 22:28:56 -- common/autotest_common.sh@1529 -- # oacs=' 0xf' 00:05:53.709 22:28:56 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:53.709 22:28:56 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:53.709 22:28:56 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:53.709 22:28:56 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:53.709 22:28:56 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:53.709 22:28:56 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:53.709 22:28:56 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:53.709 22:28:56 -- common/autotest_common.sh@1541 -- # continue 00:05:53.709 22:28:56 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:53.709 22:28:56 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:53.709 22:28:56 -- common/autotest_common.sh@10 -- # set +x 00:05:53.709 22:28:56 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:53.709 22:28:56 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:53.709 22:28:56 -- common/autotest_common.sh@10 -- # set +x 00:05:53.709 22:28:56 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:55.087 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:55.087 0000:00:04.6 (8086 0e26): 
ioatdma -> vfio-pci 00:05:55.087 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:55.087 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:55.087 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:55.087 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:55.087 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:55.087 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:55.087 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:55.087 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:55.087 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:55.087 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:55.087 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:55.087 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:55.087 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:55.087 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:56.026 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:56.026 22:28:59 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:56.026 22:28:59 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:56.026 22:28:59 -- common/autotest_common.sh@10 -- # set +x 00:05:56.026 22:28:59 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:56.026 22:28:59 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:56.026 22:28:59 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:56.026 22:28:59 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:56.026 22:28:59 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:56.026 22:28:59 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:56.026 22:28:59 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:56.026 22:28:59 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:56.026 22:28:59 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:56.026 22:28:59 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:56.026 22:28:59 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:05:56.027 22:28:59 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:56.027 22:28:59 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:56.027 22:28:59 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:56.027 22:28:59 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:05:56.027 22:28:59 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:56.027 22:28:59 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:56.027 22:28:59 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:05:56.027 22:28:59 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:56.027 22:28:59 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:05:56.027 22:28:59 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:05:56.027 22:28:59 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:88:00.0 00:05:56.027 22:28:59 -- common/autotest_common.sh@1577 -- # [[ -z 0000:88:00.0 ]] 00:05:56.027 22:28:59 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=98220 00:05:56.027 22:28:59 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.027 22:28:59 -- common/autotest_common.sh@1583 -- # waitforlisten 98220 00:05:56.027 22:28:59 -- common/autotest_common.sh@831 -- # '[' -z 98220 ']' 00:05:56.027 22:28:59 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.027 22:28:59 -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.027 22:28:59 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:56.027 22:28:59 -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.027 22:28:59 -- common/autotest_common.sh@10 -- # set +x 00:05:56.285 [2024-10-11 22:28:59.344714] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:05:56.285 [2024-10-11 22:28:59.344797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98220 ] 00:05:56.285 [2024-10-11 22:28:59.405714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.285 [2024-10-11 22:28:59.453273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.543 22:28:59 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.543 22:28:59 -- common/autotest_common.sh@864 -- # return 0 00:05:56.543 22:28:59 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:05:56.543 22:28:59 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:05:56.543 22:28:59 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:59.835 nvme0n1 00:05:59.835 22:29:02 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:59.835 [2024-10-11 22:29:03.075571] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:59.835 [2024-10-11 22:29:03.075613] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:59.835 request: 00:05:59.835 { 00:05:59.835 "nvme_ctrlr_name": "nvme0", 00:05:59.835 "password": "test", 00:05:59.835 "method": "bdev_nvme_opal_revert", 00:05:59.835 "req_id": 1 00:05:59.835 } 00:05:59.835 Got JSON-RPC error response 00:05:59.835 response: 00:05:59.835 { 00:05:59.835 
"code": -32603, 00:05:59.835 "message": "Internal error" 00:05:59.835 } 00:05:59.835 22:29:03 -- common/autotest_common.sh@1589 -- # true 00:05:59.835 22:29:03 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:05:59.835 22:29:03 -- common/autotest_common.sh@1593 -- # killprocess 98220 00:05:59.835 22:29:03 -- common/autotest_common.sh@950 -- # '[' -z 98220 ']' 00:05:59.835 22:29:03 -- common/autotest_common.sh@954 -- # kill -0 98220 00:05:59.835 22:29:03 -- common/autotest_common.sh@955 -- # uname 00:05:59.835 22:29:03 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:59.835 22:29:03 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98220 00:06:00.094 22:29:03 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.094 22:29:03 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.094 22:29:03 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98220' 00:06:00.094 killing process with pid 98220 00:06:00.094 22:29:03 -- common/autotest_common.sh@969 -- # kill 98220 00:06:00.094 22:29:03 -- common/autotest_common.sh@974 -- # wait 98220 00:06:01.998 22:29:04 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:01.998 22:29:04 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:01.998 22:29:04 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:01.998 22:29:04 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:01.998 22:29:04 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:01.998 22:29:04 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:01.998 22:29:04 -- common/autotest_common.sh@10 -- # set +x 00:06:01.998 22:29:04 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:01.998 22:29:04 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:01.998 22:29:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.998 22:29:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.998 22:29:04 -- 
common/autotest_common.sh@10 -- # set +x 00:06:01.998 ************************************ 00:06:01.998 START TEST env 00:06:01.998 ************************************ 00:06:01.998 22:29:04 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:01.998 * Looking for test storage... 00:06:01.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:01.998 22:29:04 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:01.998 22:29:04 env -- common/autotest_common.sh@1691 -- # lcov --version 00:06:01.998 22:29:04 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:01.998 22:29:05 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:01.998 22:29:05 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.998 22:29:05 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.998 22:29:05 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.998 22:29:05 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.998 22:29:05 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.998 22:29:05 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.998 22:29:05 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.998 22:29:05 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.998 22:29:05 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.998 22:29:05 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.998 22:29:05 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.998 22:29:05 env -- scripts/common.sh@344 -- # case "$op" in 00:06:01.998 22:29:05 env -- scripts/common.sh@345 -- # : 1 00:06:01.998 22:29:05 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.998 22:29:05 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:01.999 22:29:05 env -- scripts/common.sh@365 -- # decimal 1 00:06:01.999 22:29:05 env -- scripts/common.sh@353 -- # local d=1 00:06:01.999 22:29:05 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.999 22:29:05 env -- scripts/common.sh@355 -- # echo 1 00:06:01.999 22:29:05 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.999 22:29:05 env -- scripts/common.sh@366 -- # decimal 2 00:06:01.999 22:29:05 env -- scripts/common.sh@353 -- # local d=2 00:06:01.999 22:29:05 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.999 22:29:05 env -- scripts/common.sh@355 -- # echo 2 00:06:01.999 22:29:05 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.999 22:29:05 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.999 22:29:05 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.999 22:29:05 env -- scripts/common.sh@368 -- # return 0 00:06:01.999 22:29:05 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.999 22:29:05 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:01.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.999 --rc genhtml_branch_coverage=1 00:06:01.999 --rc genhtml_function_coverage=1 00:06:01.999 --rc genhtml_legend=1 00:06:01.999 --rc geninfo_all_blocks=1 00:06:01.999 --rc geninfo_unexecuted_blocks=1 00:06:01.999 00:06:01.999 ' 00:06:01.999 22:29:05 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:01.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.999 --rc genhtml_branch_coverage=1 00:06:01.999 --rc genhtml_function_coverage=1 00:06:01.999 --rc genhtml_legend=1 00:06:01.999 --rc geninfo_all_blocks=1 00:06:01.999 --rc geninfo_unexecuted_blocks=1 00:06:01.999 00:06:01.999 ' 00:06:01.999 22:29:05 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:01.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:01.999 --rc genhtml_branch_coverage=1 00:06:01.999 --rc genhtml_function_coverage=1 00:06:01.999 --rc genhtml_legend=1 00:06:01.999 --rc geninfo_all_blocks=1 00:06:01.999 --rc geninfo_unexecuted_blocks=1 00:06:01.999 00:06:01.999 ' 00:06:01.999 22:29:05 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:01.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.999 --rc genhtml_branch_coverage=1 00:06:01.999 --rc genhtml_function_coverage=1 00:06:01.999 --rc genhtml_legend=1 00:06:01.999 --rc geninfo_all_blocks=1 00:06:01.999 --rc geninfo_unexecuted_blocks=1 00:06:01.999 00:06:01.999 ' 00:06:01.999 22:29:05 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:01.999 22:29:05 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.999 22:29:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.999 22:29:05 env -- common/autotest_common.sh@10 -- # set +x 00:06:01.999 ************************************ 00:06:01.999 START TEST env_memory 00:06:01.999 ************************************ 00:06:01.999 22:29:05 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:01.999 00:06:01.999 00:06:01.999 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.999 http://cunit.sourceforge.net/ 00:06:01.999 00:06:01.999 00:06:01.999 Suite: memory 00:06:01.999 Test: alloc and free memory map ...[2024-10-11 22:29:05.091978] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:01.999 passed 00:06:01.999 Test: mem map translation ...[2024-10-11 22:29:05.111728] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:01.999 [2024-10-11 
22:29:05.111749] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:01.999 [2024-10-11 22:29:05.111804] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:01.999 [2024-10-11 22:29:05.111816] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:01.999 passed 00:06:01.999 Test: mem map registration ...[2024-10-11 22:29:05.152802] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:01.999 [2024-10-11 22:29:05.152821] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:01.999 passed 00:06:01.999 Test: mem map adjacent registrations ...passed 00:06:01.999 00:06:01.999 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.999 suites 1 1 n/a 0 0 00:06:01.999 tests 4 4 4 0 0 00:06:01.999 asserts 152 152 152 0 n/a 00:06:01.999 00:06:01.999 Elapsed time = 0.137 seconds 00:06:01.999 00:06:01.999 real 0m0.146s 00:06:01.999 user 0m0.137s 00:06:01.999 sys 0m0.008s 00:06:01.999 22:29:05 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.999 22:29:05 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:01.999 ************************************ 00:06:01.999 END TEST env_memory 00:06:01.999 ************************************ 00:06:01.999 22:29:05 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:01.999 22:29:05 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:06:01.999 22:29:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.999 22:29:05 env -- common/autotest_common.sh@10 -- # set +x 00:06:01.999 ************************************ 00:06:01.999 START TEST env_vtophys 00:06:01.999 ************************************ 00:06:01.999 22:29:05 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:01.999 EAL: lib.eal log level changed from notice to debug 00:06:01.999 EAL: Detected lcore 0 as core 0 on socket 0 00:06:01.999 EAL: Detected lcore 1 as core 1 on socket 0 00:06:01.999 EAL: Detected lcore 2 as core 2 on socket 0 00:06:01.999 EAL: Detected lcore 3 as core 3 on socket 0 00:06:01.999 EAL: Detected lcore 4 as core 4 on socket 0 00:06:01.999 EAL: Detected lcore 5 as core 5 on socket 0 00:06:01.999 EAL: Detected lcore 6 as core 8 on socket 0 00:06:01.999 EAL: Detected lcore 7 as core 9 on socket 0 00:06:02.259 EAL: Detected lcore 8 as core 10 on socket 0 00:06:02.259 EAL: Detected lcore 9 as core 11 on socket 0 00:06:02.259 EAL: Detected lcore 10 as core 12 on socket 0 00:06:02.259 EAL: Detected lcore 11 as core 13 on socket 0 00:06:02.259 EAL: Detected lcore 12 as core 0 on socket 1 00:06:02.259 EAL: Detected lcore 13 as core 1 on socket 1 00:06:02.259 EAL: Detected lcore 14 as core 2 on socket 1 00:06:02.259 EAL: Detected lcore 15 as core 3 on socket 1 00:06:02.259 EAL: Detected lcore 16 as core 4 on socket 1 00:06:02.259 EAL: Detected lcore 17 as core 5 on socket 1 00:06:02.259 EAL: Detected lcore 18 as core 8 on socket 1 00:06:02.259 EAL: Detected lcore 19 as core 9 on socket 1 00:06:02.259 EAL: Detected lcore 20 as core 10 on socket 1 00:06:02.259 EAL: Detected lcore 21 as core 11 on socket 1 00:06:02.259 EAL: Detected lcore 22 as core 12 on socket 1 00:06:02.259 EAL: Detected lcore 23 as core 13 on socket 1 00:06:02.259 EAL: Detected lcore 24 as core 0 on socket 0 00:06:02.259 EAL: Detected lcore 25 as core 
1 on socket 0 00:06:02.259 EAL: Detected lcore 26 as core 2 on socket 0 00:06:02.259 EAL: Detected lcore 27 as core 3 on socket 0 00:06:02.259 EAL: Detected lcore 28 as core 4 on socket 0 00:06:02.259 EAL: Detected lcore 29 as core 5 on socket 0 00:06:02.259 EAL: Detected lcore 30 as core 8 on socket 0 00:06:02.259 EAL: Detected lcore 31 as core 9 on socket 0 00:06:02.259 EAL: Detected lcore 32 as core 10 on socket 0 00:06:02.259 EAL: Detected lcore 33 as core 11 on socket 0 00:06:02.259 EAL: Detected lcore 34 as core 12 on socket 0 00:06:02.259 EAL: Detected lcore 35 as core 13 on socket 0 00:06:02.259 EAL: Detected lcore 36 as core 0 on socket 1 00:06:02.259 EAL: Detected lcore 37 as core 1 on socket 1 00:06:02.259 EAL: Detected lcore 38 as core 2 on socket 1 00:06:02.259 EAL: Detected lcore 39 as core 3 on socket 1 00:06:02.259 EAL: Detected lcore 40 as core 4 on socket 1 00:06:02.259 EAL: Detected lcore 41 as core 5 on socket 1 00:06:02.259 EAL: Detected lcore 42 as core 8 on socket 1 00:06:02.259 EAL: Detected lcore 43 as core 9 on socket 1 00:06:02.259 EAL: Detected lcore 44 as core 10 on socket 1 00:06:02.259 EAL: Detected lcore 45 as core 11 on socket 1 00:06:02.259 EAL: Detected lcore 46 as core 12 on socket 1 00:06:02.259 EAL: Detected lcore 47 as core 13 on socket 1 00:06:02.259 EAL: Maximum logical cores by configuration: 128 00:06:02.259 EAL: Detected CPU lcores: 48 00:06:02.259 EAL: Detected NUMA nodes: 2 00:06:02.259 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:06:02.259 EAL: Detected shared linkage of DPDK 00:06:02.259 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:06:02.259 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:06:02.259 EAL: Registered [vdev] bus. 
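The lcore detection lines above map 48 logical cores onto 2 sockets. A small filter to summarize such EAL log lines into per-socket lcore counts (reads the log on stdin; purely an aid for eyeballing topology, not part of the harness):

```shell
# Count "Detected lcore N as core M on socket S" lines per socket.
lcores_per_socket() {
    grep -o 'on socket [0-9]*' | sort | uniq -c
}
```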
00:06:02.259 EAL: bus.vdev log level changed from disabled to notice 00:06:02.259 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:06:02.260 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:06:02.260 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:02.260 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:02.260 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:06:02.260 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:06:02.260 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:06:02.260 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:06:02.260 EAL: No shared files mode enabled, IPC will be disabled 00:06:02.260 EAL: No shared files mode enabled, IPC is disabled 00:06:02.260 EAL: Bus pci wants IOVA as 'DC' 00:06:02.260 EAL: Bus vdev wants IOVA as 'DC' 00:06:02.260 EAL: Buses did not request a specific IOVA mode. 00:06:02.260 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:02.260 EAL: Selected IOVA mode 'VA' 00:06:02.260 EAL: Probing VFIO support... 00:06:02.260 EAL: IOMMU type 1 (Type 1) is supported 00:06:02.260 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:02.260 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:02.260 EAL: VFIO support initialized 00:06:02.260 EAL: Ask a virtual area of 0x2e000 bytes 00:06:02.260 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:02.260 EAL: Setting up physically contiguous memory... 
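Why EAL prints "Selected IOVA mode 'VA'" above: both buses answered 'DC' (don't care), so the decision falls back to whether an IOMMU is usable. A rough userspace approximation of that probe, assuming the standard sysfs layout (the real EAL check is more involved, consulting the bound drivers as well):

```shell
# Guess the IOVA mode the way the log's fallback path does: if any IOMMU
# groups exist, virtual addresses are safe for DMA (VA); otherwise use
# physical addresses (PA). $1 overrides the sysfs root for testing.
iova_mode_guess() {
    local root=${1:-/sys/kernel/iommu_groups}
    if [ -d "$root" ] && [ -n "$(ls -A "$root" 2>/dev/null)" ]; then
        echo VA
    else
        echo PA
    fi
}
```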
00:06:02.260 EAL: Setting maximum number of open files to 524288 00:06:02.260 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:02.260 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:02.260 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:02.260 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.260 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:02.260 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:02.260 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.260 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:02.260 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:02.260 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.260 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:02.260 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:02.260 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.260 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:02.260 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:02.260 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.260 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:02.260 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:02.260 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.260 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:02.260 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:02.260 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.260 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:02.260 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:02.260 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.260 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:02.260 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:02.260 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:06:02.260 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.260 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:02.260 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:02.260 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.260 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:02.260 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:02.260 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.260 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:02.260 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:02.260 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.260 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:02.260 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:02.260 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.260 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:02.260 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:02.260 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.260 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:02.260 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:02.260 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.260 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:02.260 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:02.260 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.260 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:02.260 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:02.260 EAL: Hugepages will be freed exactly as allocated. 
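The memseg lists above are carved out of 2 MB hugepages (`hugepage_sz:2097152`) on each of the two NUMA nodes. A small helper to report per-node hugepage counts from sysfs, roughly what EAL discovers before reserving those virtual areas; the optional root parameter exists only so the function can be exercised against a fixture:

```shell
# Print "<node>: <nr_hugepages>" for each NUMA node, for a given page
# size in kB (2048 kB = the 2 MB pages used in this run).
hugepages_per_node() {
    local sz=${1:-2048} root=${2:-/sys/devices/system/node} node f
    for node in "$root"/node*; do
        f=$node/hugepages/hugepages-${sz}kB/nr_hugepages
        [ -r "$f" ] && echo "$(basename "$node"): $(cat "$f")"
    done
}
```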
00:06:02.260 EAL: No shared files mode enabled, IPC is disabled 00:06:02.260 EAL: No shared files mode enabled, IPC is disabled 00:06:02.260 EAL: TSC frequency is ~2700000 KHz 00:06:02.260 EAL: Main lcore 0 is ready (tid=7f87f34c5a00;cpuset=[0]) 00:06:02.260 EAL: Trying to obtain current memory policy. 00:06:02.260 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.260 EAL: Restoring previous memory policy: 0 00:06:02.260 EAL: request: mp_malloc_sync 00:06:02.260 EAL: No shared files mode enabled, IPC is disabled 00:06:02.260 EAL: Heap on socket 0 was expanded by 2MB 00:06:02.260 EAL: No shared files mode enabled, IPC is disabled 00:06:02.260 EAL: No shared files mode enabled, IPC is disabled 00:06:02.260 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:02.260 EAL: Mem event callback 'spdk:(nil)' registered 00:06:02.260 00:06:02.260 00:06:02.260 CUnit - A unit testing framework for C - Version 2.1-3 00:06:02.260 http://cunit.sourceforge.net/ 00:06:02.260 00:06:02.260 00:06:02.260 Suite: components_suite 00:06:02.260 Test: vtophys_malloc_test ...passed 00:06:02.260 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:02.260 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.260 EAL: Restoring previous memory policy: 4 00:06:02.260 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.260 EAL: request: mp_malloc_sync 00:06:02.260 EAL: No shared files mode enabled, IPC is disabled 00:06:02.260 EAL: Heap on socket 0 was expanded by 4MB 00:06:02.260 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.260 EAL: request: mp_malloc_sync 00:06:02.260 EAL: No shared files mode enabled, IPC is disabled 00:06:02.260 EAL: Heap on socket 0 was shrunk by 4MB 00:06:02.260 EAL: Trying to obtain current memory policy. 
00:06:02.260 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.260 EAL: Restoring previous memory policy: 4 00:06:02.260 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.260 EAL: request: mp_malloc_sync 00:06:02.260 EAL: No shared files mode enabled, IPC is disabled 00:06:02.260 EAL: Heap on socket 0 was expanded by 6MB 00:06:02.260 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.260 EAL: request: mp_malloc_sync 00:06:02.260 EAL: No shared files mode enabled, IPC is disabled 00:06:02.260 EAL: Heap on socket 0 was shrunk by 6MB 00:06:02.260 EAL: Trying to obtain current memory policy. 00:06:02.260 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.260 EAL: Restoring previous memory policy: 4 00:06:02.260 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.260 EAL: request: mp_malloc_sync 00:06:02.260 EAL: No shared files mode enabled, IPC is disabled 00:06:02.260 EAL: Heap on socket 0 was expanded by 10MB 00:06:02.260 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.260 EAL: request: mp_malloc_sync 00:06:02.260 EAL: No shared files mode enabled, IPC is disabled 00:06:02.260 EAL: Heap on socket 0 was shrunk by 10MB 00:06:02.260 EAL: Trying to obtain current memory policy. 00:06:02.260 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.260 EAL: Restoring previous memory policy: 4 00:06:02.260 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.260 EAL: request: mp_malloc_sync 00:06:02.260 EAL: No shared files mode enabled, IPC is disabled 00:06:02.260 EAL: Heap on socket 0 was expanded by 18MB 00:06:02.260 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.260 EAL: request: mp_malloc_sync 00:06:02.260 EAL: No shared files mode enabled, IPC is disabled 00:06:02.260 EAL: Heap on socket 0 was shrunk by 18MB 00:06:02.260 EAL: Trying to obtain current memory policy. 
00:06:02.260 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.260 EAL: Restoring previous memory policy: 4 00:06:02.260 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.260 EAL: request: mp_malloc_sync 00:06:02.260 EAL: No shared files mode enabled, IPC is disabled 00:06:02.260 EAL: Heap on socket 0 was expanded by 34MB 00:06:02.260 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.260 EAL: request: mp_malloc_sync 00:06:02.260 EAL: No shared files mode enabled, IPC is disabled 00:06:02.260 EAL: Heap on socket 0 was shrunk by 34MB 00:06:02.260 EAL: Trying to obtain current memory policy. 00:06:02.260 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.260 EAL: Restoring previous memory policy: 4 00:06:02.260 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.260 EAL: request: mp_malloc_sync 00:06:02.260 EAL: No shared files mode enabled, IPC is disabled 00:06:02.260 EAL: Heap on socket 0 was expanded by 66MB 00:06:02.260 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.260 EAL: request: mp_malloc_sync 00:06:02.260 EAL: No shared files mode enabled, IPC is disabled 00:06:02.260 EAL: Heap on socket 0 was shrunk by 66MB 00:06:02.260 EAL: Trying to obtain current memory policy. 00:06:02.260 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.260 EAL: Restoring previous memory policy: 4 00:06:02.260 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.260 EAL: request: mp_malloc_sync 00:06:02.260 EAL: No shared files mode enabled, IPC is disabled 00:06:02.260 EAL: Heap on socket 0 was expanded by 130MB 00:06:02.260 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.260 EAL: request: mp_malloc_sync 00:06:02.260 EAL: No shared files mode enabled, IPC is disabled 00:06:02.260 EAL: Heap on socket 0 was shrunk by 130MB 00:06:02.260 EAL: Trying to obtain current memory policy. 
00:06:02.260 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.519 EAL: Restoring previous memory policy: 4 00:06:02.519 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.519 EAL: request: mp_malloc_sync 00:06:02.519 EAL: No shared files mode enabled, IPC is disabled 00:06:02.519 EAL: Heap on socket 0 was expanded by 258MB 00:06:02.519 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.519 EAL: request: mp_malloc_sync 00:06:02.519 EAL: No shared files mode enabled, IPC is disabled 00:06:02.519 EAL: Heap on socket 0 was shrunk by 258MB 00:06:02.519 EAL: Trying to obtain current memory policy. 00:06:02.519 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.519 EAL: Restoring previous memory policy: 4 00:06:02.519 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.519 EAL: request: mp_malloc_sync 00:06:02.519 EAL: No shared files mode enabled, IPC is disabled 00:06:02.519 EAL: Heap on socket 0 was expanded by 514MB 00:06:02.778 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.778 EAL: request: mp_malloc_sync 00:06:02.778 EAL: No shared files mode enabled, IPC is disabled 00:06:02.778 EAL: Heap on socket 0 was shrunk by 514MB 00:06:02.778 EAL: Trying to obtain current memory policy. 
00:06:02.778 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:03.036 EAL: Restoring previous memory policy: 4 00:06:03.036 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.036 EAL: request: mp_malloc_sync 00:06:03.036 EAL: No shared files mode enabled, IPC is disabled 00:06:03.036 EAL: Heap on socket 0 was expanded by 1026MB 00:06:03.293 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.552 EAL: request: mp_malloc_sync 00:06:03.552 EAL: No shared files mode enabled, IPC is disabled 00:06:03.552 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:03.552 passed 00:06:03.552 00:06:03.552 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.552 suites 1 1 n/a 0 0 00:06:03.552 tests 2 2 2 0 0 00:06:03.552 asserts 497 497 497 0 n/a 00:06:03.552 00:06:03.552 Elapsed time = 1.306 seconds 00:06:03.552 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.552 EAL: request: mp_malloc_sync 00:06:03.552 EAL: No shared files mode enabled, IPC is disabled 00:06:03.552 EAL: Heap on socket 0 was shrunk by 2MB 00:06:03.552 EAL: No shared files mode enabled, IPC is disabled 00:06:03.552 EAL: No shared files mode enabled, IPC is disabled 00:06:03.552 EAL: No shared files mode enabled, IPC is disabled 00:06:03.552 00:06:03.552 real 0m1.421s 00:06:03.552 user 0m0.828s 00:06:03.552 sys 0m0.557s 00:06:03.552 22:29:06 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.552 22:29:06 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:03.552 ************************************ 00:06:03.552 END TEST env_vtophys 00:06:03.552 ************************************ 00:06:03.552 22:29:06 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:03.552 22:29:06 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.552 22:29:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.552 22:29:06 env -- common/autotest_common.sh@10 -- # set +x 00:06:03.552 
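The vtophys_spdk_malloc_test pass above expands the heap by 4, 6, 10, 18, 34, 66, 130, 258, 514 and finally 1026 MB: each step is 2^k + 2 MB, so every allocation straddles one more power-of-two boundary than the last. Reproducing that size ladder:

```shell
# Emit the MB sizes the malloc test walks through: (1 << k) + 2 for
# k = 1..10, matching the expanded/shrunk amounts in the log.
malloc_test_sizes() {
    local k
    for (( k = 1; k <= 10; k++ )); do
        echo $(( (1 << k) + 2 ))
    done
}
```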
************************************ 00:06:03.552 START TEST env_pci 00:06:03.552 ************************************ 00:06:03.552 22:29:06 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:03.552 00:06:03.552 00:06:03.552 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.552 http://cunit.sourceforge.net/ 00:06:03.552 00:06:03.552 00:06:03.552 Suite: pci 00:06:03.552 Test: pci_hook ...[2024-10-11 22:29:06.732158] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 99116 has claimed it 00:06:03.552 EAL: Cannot find device (10000:00:01.0) 00:06:03.552 EAL: Failed to attach device on primary process 00:06:03.552 passed 00:06:03.552 00:06:03.552 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.552 suites 1 1 n/a 0 0 00:06:03.552 tests 1 1 1 0 0 00:06:03.552 asserts 25 25 25 0 n/a 00:06:03.552 00:06:03.552 Elapsed time = 0.021 seconds 00:06:03.552 00:06:03.552 real 0m0.034s 00:06:03.552 user 0m0.010s 00:06:03.552 sys 0m0.024s 00:06:03.552 22:29:06 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.552 22:29:06 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:03.552 ************************************ 00:06:03.552 END TEST env_pci 00:06:03.552 ************************************ 00:06:03.552 22:29:06 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:03.552 22:29:06 env -- env/env.sh@15 -- # uname 00:06:03.552 22:29:06 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:03.552 22:29:06 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:03.552 22:29:06 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:03.552 22:29:06 env -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:03.552 22:29:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.552 22:29:06 env -- common/autotest_common.sh@10 -- # set +x 00:06:03.552 ************************************ 00:06:03.552 START TEST env_dpdk_post_init 00:06:03.552 ************************************ 00:06:03.552 22:29:06 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:03.812 EAL: Detected CPU lcores: 48 00:06:03.813 EAL: Detected NUMA nodes: 2 00:06:03.813 EAL: Detected shared linkage of DPDK 00:06:03.813 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:03.813 EAL: Selected IOVA mode 'VA' 00:06:03.813 EAL: VFIO support initialized 00:06:03.813 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:03.813 EAL: Using IOMMU type 1 (Type 1) 00:06:03.813 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:06:03.813 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:06:03.813 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:06:03.813 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:06:03.813 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:06:03.813 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:06:03.813 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:06:03.813 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:06:03.813 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:06:03.813 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:06:03.813 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:06:03.813 EAL: Probe PCI driver: 
spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:06:03.813 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:06:03.813 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:06:04.074 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:06:04.074 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:06:04.644 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:06:07.928 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:06:07.928 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:06:08.187 Starting DPDK initialization... 00:06:08.187 Starting SPDK post initialization... 00:06:08.187 SPDK NVMe probe 00:06:08.187 Attaching to 0000:88:00.0 00:06:08.187 Attached to 0000:88:00.0 00:06:08.187 Cleaning up... 00:06:08.187 00:06:08.187 real 0m4.419s 00:06:08.187 user 0m3.310s 00:06:08.187 sys 0m0.170s 00:06:08.187 22:29:11 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.187 22:29:11 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:08.187 ************************************ 00:06:08.187 END TEST env_dpdk_post_init 00:06:08.187 ************************************ 00:06:08.187 22:29:11 env -- env/env.sh@26 -- # uname 00:06:08.187 22:29:11 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:08.187 22:29:11 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:08.187 22:29:11 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.187 22:29:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.187 22:29:11 env -- common/autotest_common.sh@10 -- # set +x 00:06:08.187 ************************************ 00:06:08.187 START TEST env_mem_callbacks 00:06:08.187 ************************************ 00:06:08.187 22:29:11 
env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:08.187 EAL: Detected CPU lcores: 48 00:06:08.187 EAL: Detected NUMA nodes: 2 00:06:08.187 EAL: Detected shared linkage of DPDK 00:06:08.187 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:08.187 EAL: Selected IOVA mode 'VA' 00:06:08.187 EAL: VFIO support initialized 00:06:08.187 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:08.187 00:06:08.187 00:06:08.187 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.188 http://cunit.sourceforge.net/ 00:06:08.188 00:06:08.188 00:06:08.188 Suite: memory 00:06:08.188 Test: test ... 00:06:08.188 register 0x200000200000 2097152 00:06:08.188 malloc 3145728 00:06:08.188 register 0x200000400000 4194304 00:06:08.188 buf 0x200000500000 len 3145728 PASSED 00:06:08.188 malloc 64 00:06:08.188 buf 0x2000004fff40 len 64 PASSED 00:06:08.188 malloc 4194304 00:06:08.188 register 0x200000800000 6291456 00:06:08.188 buf 0x200000a00000 len 4194304 PASSED 00:06:08.188 free 0x200000500000 3145728 00:06:08.188 free 0x2000004fff40 64 00:06:08.188 unregister 0x200000400000 4194304 PASSED 00:06:08.188 free 0x200000a00000 4194304 00:06:08.188 unregister 0x200000800000 6291456 PASSED 00:06:08.188 malloc 8388608 00:06:08.188 register 0x200000400000 10485760 00:06:08.188 buf 0x200000600000 len 8388608 PASSED 00:06:08.188 free 0x200000600000 8388608 00:06:08.188 unregister 0x200000400000 10485760 PASSED 00:06:08.188 passed 00:06:08.188 00:06:08.188 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.188 suites 1 1 n/a 0 0 00:06:08.188 tests 1 1 1 0 0 00:06:08.188 asserts 15 15 15 0 n/a 00:06:08.188 00:06:08.188 Elapsed time = 0.004 seconds 00:06:08.188 00:06:08.188 real 0m0.048s 00:06:08.188 user 0m0.013s 00:06:08.188 sys 0m0.035s 00:06:08.188 22:29:11 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.188 22:29:11 
env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:08.188 ************************************ 00:06:08.188 END TEST env_mem_callbacks 00:06:08.188 ************************************ 00:06:08.188 00:06:08.188 real 0m6.462s 00:06:08.188 user 0m4.491s 00:06:08.188 sys 0m1.017s 00:06:08.188 22:29:11 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.188 22:29:11 env -- common/autotest_common.sh@10 -- # set +x 00:06:08.188 ************************************ 00:06:08.188 END TEST env 00:06:08.188 ************************************ 00:06:08.188 22:29:11 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:08.188 22:29:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.188 22:29:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.188 22:29:11 -- common/autotest_common.sh@10 -- # set +x 00:06:08.188 ************************************ 00:06:08.188 START TEST rpc 00:06:08.188 ************************************ 00:06:08.188 22:29:11 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:08.188 * Looking for test storage... 
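Each test group above is driven by the `run_test` helper, whose visible behavior in this log is: print a `START TEST` banner, time the command, then print an `END TEST` banner with `real`/`user`/`sys` totals. The sketch below is a simplified re-creation of that pattern for illustration; the real helper lives in `common/autotest_common.sh` and does more (xtrace control, argument checks like `'[' 2 -le 1 ']'`), so the function name and body here are assumptions, not the actual implementation.

```shell
#!/usr/bin/env bash
# Illustrative sketch of the run_test banner-and-timing pattern seen in
# this log. Not the real autotest_common.sh helper.
run_test_sketch() {
    local name="$1"; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                 # `time` prints real/user/sys to stderr
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}

run_test_sketch demo_true /bin/true
```

The wrapper preserves the wrapped command's exit status, which is what lets the harness mark a group PASSED or FAILED while still printing its timing summary.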
00:06:08.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:08.188 22:29:11 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:08.188 22:29:11 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:08.188 22:29:11 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:08.447 22:29:11 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:08.447 22:29:11 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.447 22:29:11 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.447 22:29:11 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.447 22:29:11 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.447 22:29:11 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.447 22:29:11 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.447 22:29:11 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.447 22:29:11 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.447 22:29:11 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.447 22:29:11 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.447 22:29:11 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.447 22:29:11 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:08.447 22:29:11 rpc -- scripts/common.sh@345 -- # : 1 00:06:08.447 22:29:11 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.447 22:29:11 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.447 22:29:11 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:08.447 22:29:11 rpc -- scripts/common.sh@353 -- # local d=1 00:06:08.447 22:29:11 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.447 22:29:11 rpc -- scripts/common.sh@355 -- # echo 1 00:06:08.447 22:29:11 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.447 22:29:11 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:08.447 22:29:11 rpc -- scripts/common.sh@353 -- # local d=2 00:06:08.447 22:29:11 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.447 22:29:11 rpc -- scripts/common.sh@355 -- # echo 2 00:06:08.447 22:29:11 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.447 22:29:11 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.447 22:29:11 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.447 22:29:11 rpc -- scripts/common.sh@368 -- # return 0 00:06:08.447 22:29:11 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.447 22:29:11 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:08.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.447 --rc genhtml_branch_coverage=1 00:06:08.447 --rc genhtml_function_coverage=1 00:06:08.447 --rc genhtml_legend=1 00:06:08.447 --rc geninfo_all_blocks=1 00:06:08.447 --rc geninfo_unexecuted_blocks=1 00:06:08.447 00:06:08.447 ' 00:06:08.447 22:29:11 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:08.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.447 --rc genhtml_branch_coverage=1 00:06:08.447 --rc genhtml_function_coverage=1 00:06:08.447 --rc genhtml_legend=1 00:06:08.447 --rc geninfo_all_blocks=1 00:06:08.447 --rc geninfo_unexecuted_blocks=1 00:06:08.447 00:06:08.447 ' 00:06:08.447 22:29:11 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:08.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:08.447 --rc genhtml_branch_coverage=1 00:06:08.447 --rc genhtml_function_coverage=1 00:06:08.447 --rc genhtml_legend=1 00:06:08.447 --rc geninfo_all_blocks=1 00:06:08.447 --rc geninfo_unexecuted_blocks=1 00:06:08.447 00:06:08.447 ' 00:06:08.447 22:29:11 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:08.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.447 --rc genhtml_branch_coverage=1 00:06:08.447 --rc genhtml_function_coverage=1 00:06:08.447 --rc genhtml_legend=1 00:06:08.447 --rc geninfo_all_blocks=1 00:06:08.447 --rc geninfo_unexecuted_blocks=1 00:06:08.447 00:06:08.447 ' 00:06:08.447 22:29:11 rpc -- rpc/rpc.sh@65 -- # spdk_pid=99864 00:06:08.447 22:29:11 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:08.447 22:29:11 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:08.447 22:29:11 rpc -- rpc/rpc.sh@67 -- # waitforlisten 99864 00:06:08.447 22:29:11 rpc -- common/autotest_common.sh@831 -- # '[' -z 99864 ']' 00:06:08.447 22:29:11 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.447 22:29:11 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.447 22:29:11 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.447 22:29:11 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.447 22:29:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.447 [2024-10-11 22:29:11.599608] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:06:08.447 [2024-10-11 22:29:11.599702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99864 ] 00:06:08.447 [2024-10-11 22:29:11.659342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.447 [2024-10-11 22:29:11.705394] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:08.447 [2024-10-11 22:29:11.705451] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 99864' to capture a snapshot of events at runtime. 00:06:08.447 [2024-10-11 22:29:11.705484] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:08.447 [2024-10-11 22:29:11.705496] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:08.447 [2024-10-11 22:29:11.705505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid99864 for offline analysis/debug. 
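The `waitforlisten 99864` call above blocks until the freshly launched `spdk_tgt` is "up and listening on UNIX domain socket /var/tmp/spdk.sock". A minimal sketch of that idea, assuming a simple poll-for-socket loop, is below; the function name and retry mechanics here are illustrative (the real helper in `autotest_common.sh` also checks that the PID is still alive), not the actual implementation.

```shell
#!/usr/bin/env bash
# Hedged sketch of the waitforlisten idea: poll until the target's RPC
# UNIX-domain socket exists, or give up after max_retries attempts.
waitforsocket_sketch() {
    local rpc_addr="${1:-/var/tmp/spdk.sock}"   # default SPDK RPC socket path
    local max_retries="${2:-100}"
    local i=0
    while [ "$i" -lt "$max_retries" ]; do
        [ -S "$rpc_addr" ] && return 0          # socket exists: target is listening
        sleep 0.1
        i=$((i + 1))
    done
    return 1                                    # timed out waiting
}

waitforsocket_sketch /var/tmp/spdk.sock 3 || echo "no SPDK target listening"
```

Polling for the socket file (rather than sleeping a fixed interval) is what keeps startup ordering deterministic between the harness and the target process.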
00:06:08.447 [2024-10-11 22:29:11.706146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.706 22:29:11 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.706 22:29:11 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:08.707 22:29:11 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:08.707 22:29:11 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:08.707 22:29:11 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:08.707 22:29:11 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:08.707 22:29:11 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.707 22:29:11 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.707 22:29:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.965 ************************************ 00:06:08.965 START TEST rpc_integrity 00:06:08.965 ************************************ 00:06:08.965 22:29:11 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:08.965 22:29:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:08.965 22:29:11 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.965 22:29:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.965 22:29:11 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.965 22:29:11 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:06:08.965 22:29:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:08.965 22:29:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:08.965 22:29:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:08.965 22:29:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.965 22:29:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.965 22:29:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.965 22:29:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:08.965 22:29:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:08.965 22:29:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.965 22:29:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.965 22:29:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.965 22:29:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:08.965 { 00:06:08.965 "name": "Malloc0", 00:06:08.965 "aliases": [ 00:06:08.965 "1da6ac94-fba1-4abd-a13f-fd59dab98a47" 00:06:08.965 ], 00:06:08.965 "product_name": "Malloc disk", 00:06:08.965 "block_size": 512, 00:06:08.965 "num_blocks": 16384, 00:06:08.965 "uuid": "1da6ac94-fba1-4abd-a13f-fd59dab98a47", 00:06:08.965 "assigned_rate_limits": { 00:06:08.966 "rw_ios_per_sec": 0, 00:06:08.966 "rw_mbytes_per_sec": 0, 00:06:08.966 "r_mbytes_per_sec": 0, 00:06:08.966 "w_mbytes_per_sec": 0 00:06:08.966 }, 00:06:08.966 "claimed": false, 00:06:08.966 "zoned": false, 00:06:08.966 "supported_io_types": { 00:06:08.966 "read": true, 00:06:08.966 "write": true, 00:06:08.966 "unmap": true, 00:06:08.966 "flush": true, 00:06:08.966 "reset": true, 00:06:08.966 "nvme_admin": false, 00:06:08.966 "nvme_io": false, 00:06:08.966 "nvme_io_md": false, 00:06:08.966 "write_zeroes": true, 00:06:08.966 "zcopy": true, 00:06:08.966 "get_zone_info": false, 00:06:08.966 
"zone_management": false, 00:06:08.966 "zone_append": false, 00:06:08.966 "compare": false, 00:06:08.966 "compare_and_write": false, 00:06:08.966 "abort": true, 00:06:08.966 "seek_hole": false, 00:06:08.966 "seek_data": false, 00:06:08.966 "copy": true, 00:06:08.966 "nvme_iov_md": false 00:06:08.966 }, 00:06:08.966 "memory_domains": [ 00:06:08.966 { 00:06:08.966 "dma_device_id": "system", 00:06:08.966 "dma_device_type": 1 00:06:08.966 }, 00:06:08.966 { 00:06:08.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.966 "dma_device_type": 2 00:06:08.966 } 00:06:08.966 ], 00:06:08.966 "driver_specific": {} 00:06:08.966 } 00:06:08.966 ]' 00:06:08.966 22:29:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:08.966 22:29:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:08.966 22:29:12 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:08.966 22:29:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.966 22:29:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.966 [2024-10-11 22:29:12.090273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:08.966 [2024-10-11 22:29:12.090312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:08.966 [2024-10-11 22:29:12.090347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc62c40 00:06:08.966 [2024-10-11 22:29:12.090360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:08.966 [2024-10-11 22:29:12.091675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:08.966 [2024-10-11 22:29:12.091700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:08.966 Passthru0 00:06:08.966 22:29:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.966 22:29:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:06:08.966 22:29:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.966 22:29:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.966 22:29:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.966 22:29:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:08.966 { 00:06:08.966 "name": "Malloc0", 00:06:08.966 "aliases": [ 00:06:08.966 "1da6ac94-fba1-4abd-a13f-fd59dab98a47" 00:06:08.966 ], 00:06:08.966 "product_name": "Malloc disk", 00:06:08.966 "block_size": 512, 00:06:08.966 "num_blocks": 16384, 00:06:08.966 "uuid": "1da6ac94-fba1-4abd-a13f-fd59dab98a47", 00:06:08.966 "assigned_rate_limits": { 00:06:08.966 "rw_ios_per_sec": 0, 00:06:08.966 "rw_mbytes_per_sec": 0, 00:06:08.966 "r_mbytes_per_sec": 0, 00:06:08.966 "w_mbytes_per_sec": 0 00:06:08.966 }, 00:06:08.966 "claimed": true, 00:06:08.966 "claim_type": "exclusive_write", 00:06:08.966 "zoned": false, 00:06:08.966 "supported_io_types": { 00:06:08.966 "read": true, 00:06:08.966 "write": true, 00:06:08.966 "unmap": true, 00:06:08.966 "flush": true, 00:06:08.966 "reset": true, 00:06:08.966 "nvme_admin": false, 00:06:08.966 "nvme_io": false, 00:06:08.966 "nvme_io_md": false, 00:06:08.966 "write_zeroes": true, 00:06:08.966 "zcopy": true, 00:06:08.966 "get_zone_info": false, 00:06:08.966 "zone_management": false, 00:06:08.966 "zone_append": false, 00:06:08.966 "compare": false, 00:06:08.966 "compare_and_write": false, 00:06:08.966 "abort": true, 00:06:08.966 "seek_hole": false, 00:06:08.966 "seek_data": false, 00:06:08.966 "copy": true, 00:06:08.966 "nvme_iov_md": false 00:06:08.966 }, 00:06:08.966 "memory_domains": [ 00:06:08.966 { 00:06:08.966 "dma_device_id": "system", 00:06:08.966 "dma_device_type": 1 00:06:08.966 }, 00:06:08.966 { 00:06:08.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.966 "dma_device_type": 2 00:06:08.966 } 00:06:08.966 ], 00:06:08.966 "driver_specific": {} 00:06:08.966 }, 00:06:08.966 { 
00:06:08.966 "name": "Passthru0", 00:06:08.966 "aliases": [ 00:06:08.966 "f72615a7-2384-5f3d-9d37-aa6288f7ffb9" 00:06:08.966 ], 00:06:08.966 "product_name": "passthru", 00:06:08.966 "block_size": 512, 00:06:08.966 "num_blocks": 16384, 00:06:08.966 "uuid": "f72615a7-2384-5f3d-9d37-aa6288f7ffb9", 00:06:08.966 "assigned_rate_limits": { 00:06:08.966 "rw_ios_per_sec": 0, 00:06:08.966 "rw_mbytes_per_sec": 0, 00:06:08.966 "r_mbytes_per_sec": 0, 00:06:08.966 "w_mbytes_per_sec": 0 00:06:08.966 }, 00:06:08.966 "claimed": false, 00:06:08.966 "zoned": false, 00:06:08.966 "supported_io_types": { 00:06:08.966 "read": true, 00:06:08.966 "write": true, 00:06:08.966 "unmap": true, 00:06:08.966 "flush": true, 00:06:08.966 "reset": true, 00:06:08.966 "nvme_admin": false, 00:06:08.966 "nvme_io": false, 00:06:08.966 "nvme_io_md": false, 00:06:08.966 "write_zeroes": true, 00:06:08.966 "zcopy": true, 00:06:08.966 "get_zone_info": false, 00:06:08.966 "zone_management": false, 00:06:08.966 "zone_append": false, 00:06:08.966 "compare": false, 00:06:08.966 "compare_and_write": false, 00:06:08.966 "abort": true, 00:06:08.966 "seek_hole": false, 00:06:08.966 "seek_data": false, 00:06:08.966 "copy": true, 00:06:08.966 "nvme_iov_md": false 00:06:08.966 }, 00:06:08.966 "memory_domains": [ 00:06:08.966 { 00:06:08.966 "dma_device_id": "system", 00:06:08.966 "dma_device_type": 1 00:06:08.966 }, 00:06:08.966 { 00:06:08.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.966 "dma_device_type": 2 00:06:08.966 } 00:06:08.966 ], 00:06:08.966 "driver_specific": { 00:06:08.966 "passthru": { 00:06:08.966 "name": "Passthru0", 00:06:08.966 "base_bdev_name": "Malloc0" 00:06:08.966 } 00:06:08.966 } 00:06:08.966 } 00:06:08.966 ]' 00:06:08.966 22:29:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:08.966 22:29:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:08.966 22:29:12 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:08.966 22:29:12 
rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.966 22:29:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.966 22:29:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.966 22:29:12 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:08.966 22:29:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.966 22:29:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.966 22:29:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.966 22:29:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:08.966 22:29:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.966 22:29:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.966 22:29:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.966 22:29:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:08.966 22:29:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:08.966 22:29:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:08.966 00:06:08.966 real 0m0.212s 00:06:08.966 user 0m0.142s 00:06:08.966 sys 0m0.016s 00:06:08.966 22:29:12 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.966 22:29:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.966 ************************************ 00:06:08.966 END TEST rpc_integrity 00:06:08.966 ************************************ 00:06:08.966 22:29:12 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:08.966 22:29:12 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.966 22:29:12 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.966 22:29:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.225 ************************************ 00:06:09.225 START TEST rpc_plugins 
00:06:09.225 ************************************ 00:06:09.225 22:29:12 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:09.225 22:29:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:09.225 22:29:12 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.225 22:29:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:09.225 22:29:12 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.225 22:29:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:09.225 22:29:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:09.225 22:29:12 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.225 22:29:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:09.225 22:29:12 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.225 22:29:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:09.225 { 00:06:09.225 "name": "Malloc1", 00:06:09.225 "aliases": [ 00:06:09.225 "f5e57cef-bb2d-47f5-942b-3f1603feca3c" 00:06:09.225 ], 00:06:09.225 "product_name": "Malloc disk", 00:06:09.225 "block_size": 4096, 00:06:09.225 "num_blocks": 256, 00:06:09.225 "uuid": "f5e57cef-bb2d-47f5-942b-3f1603feca3c", 00:06:09.225 "assigned_rate_limits": { 00:06:09.225 "rw_ios_per_sec": 0, 00:06:09.225 "rw_mbytes_per_sec": 0, 00:06:09.225 "r_mbytes_per_sec": 0, 00:06:09.225 "w_mbytes_per_sec": 0 00:06:09.225 }, 00:06:09.225 "claimed": false, 00:06:09.225 "zoned": false, 00:06:09.225 "supported_io_types": { 00:06:09.225 "read": true, 00:06:09.225 "write": true, 00:06:09.225 "unmap": true, 00:06:09.225 "flush": true, 00:06:09.225 "reset": true, 00:06:09.225 "nvme_admin": false, 00:06:09.225 "nvme_io": false, 00:06:09.225 "nvme_io_md": false, 00:06:09.225 "write_zeroes": true, 00:06:09.225 "zcopy": true, 00:06:09.225 "get_zone_info": false, 00:06:09.225 "zone_management": false, 00:06:09.225 
"zone_append": false, 00:06:09.225 "compare": false, 00:06:09.225 "compare_and_write": false, 00:06:09.225 "abort": true, 00:06:09.225 "seek_hole": false, 00:06:09.225 "seek_data": false, 00:06:09.225 "copy": true, 00:06:09.225 "nvme_iov_md": false 00:06:09.225 }, 00:06:09.225 "memory_domains": [ 00:06:09.225 { 00:06:09.225 "dma_device_id": "system", 00:06:09.225 "dma_device_type": 1 00:06:09.225 }, 00:06:09.225 { 00:06:09.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.225 "dma_device_type": 2 00:06:09.225 } 00:06:09.225 ], 00:06:09.225 "driver_specific": {} 00:06:09.225 } 00:06:09.225 ]' 00:06:09.225 22:29:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:09.225 22:29:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:09.225 22:29:12 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:09.225 22:29:12 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.225 22:29:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:09.225 22:29:12 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.225 22:29:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:09.225 22:29:12 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.225 22:29:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:09.225 22:29:12 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.225 22:29:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:09.225 22:29:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:09.225 22:29:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:09.225 00:06:09.225 real 0m0.107s 00:06:09.225 user 0m0.067s 00:06:09.225 sys 0m0.010s 00:06:09.225 22:29:12 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.225 22:29:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:09.225 ************************************ 
00:06:09.225 END TEST rpc_plugins 00:06:09.225 ************************************ 00:06:09.225 22:29:12 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:09.225 22:29:12 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.225 22:29:12 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.225 22:29:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.225 ************************************ 00:06:09.225 START TEST rpc_trace_cmd_test 00:06:09.225 ************************************ 00:06:09.225 22:29:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:09.225 22:29:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:09.225 22:29:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:09.225 22:29:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.225 22:29:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:09.225 22:29:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.225 22:29:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:09.225 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid99864", 00:06:09.225 "tpoint_group_mask": "0x8", 00:06:09.225 "iscsi_conn": { 00:06:09.225 "mask": "0x2", 00:06:09.225 "tpoint_mask": "0x0" 00:06:09.225 }, 00:06:09.225 "scsi": { 00:06:09.225 "mask": "0x4", 00:06:09.225 "tpoint_mask": "0x0" 00:06:09.225 }, 00:06:09.225 "bdev": { 00:06:09.225 "mask": "0x8", 00:06:09.226 "tpoint_mask": "0xffffffffffffffff" 00:06:09.226 }, 00:06:09.226 "nvmf_rdma": { 00:06:09.226 "mask": "0x10", 00:06:09.226 "tpoint_mask": "0x0" 00:06:09.226 }, 00:06:09.226 "nvmf_tcp": { 00:06:09.226 "mask": "0x20", 00:06:09.226 "tpoint_mask": "0x0" 00:06:09.226 }, 00:06:09.226 "ftl": { 00:06:09.226 "mask": "0x40", 00:06:09.226 "tpoint_mask": "0x0" 00:06:09.226 }, 00:06:09.226 "blobfs": { 00:06:09.226 "mask": "0x80", 00:06:09.226 
"tpoint_mask": "0x0" 00:06:09.226 }, 00:06:09.226 "dsa": { 00:06:09.226 "mask": "0x200", 00:06:09.226 "tpoint_mask": "0x0" 00:06:09.226 }, 00:06:09.226 "thread": { 00:06:09.226 "mask": "0x400", 00:06:09.226 "tpoint_mask": "0x0" 00:06:09.226 }, 00:06:09.226 "nvme_pcie": { 00:06:09.226 "mask": "0x800", 00:06:09.226 "tpoint_mask": "0x0" 00:06:09.226 }, 00:06:09.226 "iaa": { 00:06:09.226 "mask": "0x1000", 00:06:09.226 "tpoint_mask": "0x0" 00:06:09.226 }, 00:06:09.226 "nvme_tcp": { 00:06:09.226 "mask": "0x2000", 00:06:09.226 "tpoint_mask": "0x0" 00:06:09.226 }, 00:06:09.226 "bdev_nvme": { 00:06:09.226 "mask": "0x4000", 00:06:09.226 "tpoint_mask": "0x0" 00:06:09.226 }, 00:06:09.226 "sock": { 00:06:09.226 "mask": "0x8000", 00:06:09.226 "tpoint_mask": "0x0" 00:06:09.226 }, 00:06:09.226 "blob": { 00:06:09.226 "mask": "0x10000", 00:06:09.226 "tpoint_mask": "0x0" 00:06:09.226 }, 00:06:09.226 "bdev_raid": { 00:06:09.226 "mask": "0x20000", 00:06:09.226 "tpoint_mask": "0x0" 00:06:09.226 }, 00:06:09.226 "scheduler": { 00:06:09.226 "mask": "0x40000", 00:06:09.226 "tpoint_mask": "0x0" 00:06:09.226 } 00:06:09.226 }' 00:06:09.226 22:29:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:09.226 22:29:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:09.226 22:29:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:09.226 22:29:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:09.226 22:29:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:09.485 22:29:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:09.485 22:29:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:09.485 22:29:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:09.485 22:29:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:09.485 22:29:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:06:09.485 00:06:09.485 real 0m0.185s 00:06:09.485 user 0m0.159s 00:06:09.485 sys 0m0.015s 00:06:09.485 22:29:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.485 22:29:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:09.485 ************************************ 00:06:09.485 END TEST rpc_trace_cmd_test 00:06:09.485 ************************************ 00:06:09.485 22:29:12 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:09.485 22:29:12 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:09.485 22:29:12 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:09.485 22:29:12 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.485 22:29:12 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.485 22:29:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.485 ************************************ 00:06:09.485 START TEST rpc_daemon_integrity 00:06:09.485 ************************************ 00:06:09.485 22:29:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:09.485 22:29:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:09.485 22:29:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.485 22:29:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.485 22:29:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.485 22:29:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:09.485 22:29:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:09.485 22:29:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:09.485 22:29:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:09.485 22:29:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.485 22:29:12 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:06:09.485 22:29:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.485 22:29:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:09.485 22:29:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:09.485 22:29:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.485 22:29:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.485 22:29:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.485 22:29:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:09.485 { 00:06:09.485 "name": "Malloc2", 00:06:09.485 "aliases": [ 00:06:09.485 "b1101b04-0437-4ded-a442-7d1fb742872a" 00:06:09.485 ], 00:06:09.485 "product_name": "Malloc disk", 00:06:09.485 "block_size": 512, 00:06:09.485 "num_blocks": 16384, 00:06:09.485 "uuid": "b1101b04-0437-4ded-a442-7d1fb742872a", 00:06:09.485 "assigned_rate_limits": { 00:06:09.485 "rw_ios_per_sec": 0, 00:06:09.485 "rw_mbytes_per_sec": 0, 00:06:09.485 "r_mbytes_per_sec": 0, 00:06:09.485 "w_mbytes_per_sec": 0 00:06:09.485 }, 00:06:09.485 "claimed": false, 00:06:09.485 "zoned": false, 00:06:09.485 "supported_io_types": { 00:06:09.485 "read": true, 00:06:09.485 "write": true, 00:06:09.485 "unmap": true, 00:06:09.485 "flush": true, 00:06:09.485 "reset": true, 00:06:09.485 "nvme_admin": false, 00:06:09.485 "nvme_io": false, 00:06:09.485 "nvme_io_md": false, 00:06:09.485 "write_zeroes": true, 00:06:09.485 "zcopy": true, 00:06:09.485 "get_zone_info": false, 00:06:09.485 "zone_management": false, 00:06:09.485 "zone_append": false, 00:06:09.485 "compare": false, 00:06:09.485 "compare_and_write": false, 00:06:09.485 "abort": true, 00:06:09.485 "seek_hole": false, 00:06:09.485 "seek_data": false, 00:06:09.485 "copy": true, 00:06:09.485 "nvme_iov_md": false 00:06:09.485 }, 00:06:09.485 "memory_domains": [ 00:06:09.485 { 
00:06:09.485 "dma_device_id": "system", 00:06:09.485 "dma_device_type": 1 00:06:09.485 }, 00:06:09.485 { 00:06:09.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.485 "dma_device_type": 2 00:06:09.485 } 00:06:09.485 ], 00:06:09.485 "driver_specific": {} 00:06:09.485 } 00:06:09.485 ]' 00:06:09.485 22:29:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:09.485 22:29:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:09.485 22:29:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:09.485 22:29:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.485 22:29:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.485 [2024-10-11 22:29:12.736196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:09.485 [2024-10-11 22:29:12.736251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:09.485 [2024-10-11 22:29:12.736275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc66950 00:06:09.485 [2024-10-11 22:29:12.736289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:09.485 [2024-10-11 22:29:12.737455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:09.485 [2024-10-11 22:29:12.737477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:09.485 Passthru0 00:06:09.485 22:29:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.485 22:29:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:09.485 22:29:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.485 22:29:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.743 22:29:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:06:09.743 22:29:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:09.743 { 00:06:09.743 "name": "Malloc2", 00:06:09.743 "aliases": [ 00:06:09.743 "b1101b04-0437-4ded-a442-7d1fb742872a" 00:06:09.743 ], 00:06:09.743 "product_name": "Malloc disk", 00:06:09.743 "block_size": 512, 00:06:09.743 "num_blocks": 16384, 00:06:09.743 "uuid": "b1101b04-0437-4ded-a442-7d1fb742872a", 00:06:09.743 "assigned_rate_limits": { 00:06:09.743 "rw_ios_per_sec": 0, 00:06:09.743 "rw_mbytes_per_sec": 0, 00:06:09.743 "r_mbytes_per_sec": 0, 00:06:09.743 "w_mbytes_per_sec": 0 00:06:09.743 }, 00:06:09.743 "claimed": true, 00:06:09.743 "claim_type": "exclusive_write", 00:06:09.743 "zoned": false, 00:06:09.743 "supported_io_types": { 00:06:09.743 "read": true, 00:06:09.743 "write": true, 00:06:09.743 "unmap": true, 00:06:09.743 "flush": true, 00:06:09.743 "reset": true, 00:06:09.743 "nvme_admin": false, 00:06:09.743 "nvme_io": false, 00:06:09.744 "nvme_io_md": false, 00:06:09.744 "write_zeroes": true, 00:06:09.744 "zcopy": true, 00:06:09.744 "get_zone_info": false, 00:06:09.744 "zone_management": false, 00:06:09.744 "zone_append": false, 00:06:09.744 "compare": false, 00:06:09.744 "compare_and_write": false, 00:06:09.744 "abort": true, 00:06:09.744 "seek_hole": false, 00:06:09.744 "seek_data": false, 00:06:09.744 "copy": true, 00:06:09.744 "nvme_iov_md": false 00:06:09.744 }, 00:06:09.744 "memory_domains": [ 00:06:09.744 { 00:06:09.744 "dma_device_id": "system", 00:06:09.744 "dma_device_type": 1 00:06:09.744 }, 00:06:09.744 { 00:06:09.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.744 "dma_device_type": 2 00:06:09.744 } 00:06:09.744 ], 00:06:09.744 "driver_specific": {} 00:06:09.744 }, 00:06:09.744 { 00:06:09.744 "name": "Passthru0", 00:06:09.744 "aliases": [ 00:06:09.744 "ec53fe34-934f-5e42-a1a6-48708f0bfa16" 00:06:09.744 ], 00:06:09.744 "product_name": "passthru", 00:06:09.744 "block_size": 512, 00:06:09.744 "num_blocks": 16384, 00:06:09.744 "uuid": 
"ec53fe34-934f-5e42-a1a6-48708f0bfa16", 00:06:09.744 "assigned_rate_limits": { 00:06:09.744 "rw_ios_per_sec": 0, 00:06:09.744 "rw_mbytes_per_sec": 0, 00:06:09.744 "r_mbytes_per_sec": 0, 00:06:09.744 "w_mbytes_per_sec": 0 00:06:09.744 }, 00:06:09.744 "claimed": false, 00:06:09.744 "zoned": false, 00:06:09.744 "supported_io_types": { 00:06:09.744 "read": true, 00:06:09.744 "write": true, 00:06:09.744 "unmap": true, 00:06:09.744 "flush": true, 00:06:09.744 "reset": true, 00:06:09.744 "nvme_admin": false, 00:06:09.744 "nvme_io": false, 00:06:09.744 "nvme_io_md": false, 00:06:09.744 "write_zeroes": true, 00:06:09.744 "zcopy": true, 00:06:09.744 "get_zone_info": false, 00:06:09.744 "zone_management": false, 00:06:09.744 "zone_append": false, 00:06:09.744 "compare": false, 00:06:09.744 "compare_and_write": false, 00:06:09.744 "abort": true, 00:06:09.744 "seek_hole": false, 00:06:09.744 "seek_data": false, 00:06:09.744 "copy": true, 00:06:09.744 "nvme_iov_md": false 00:06:09.744 }, 00:06:09.744 "memory_domains": [ 00:06:09.744 { 00:06:09.744 "dma_device_id": "system", 00:06:09.744 "dma_device_type": 1 00:06:09.744 }, 00:06:09.744 { 00:06:09.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.744 "dma_device_type": 2 00:06:09.744 } 00:06:09.744 ], 00:06:09.744 "driver_specific": { 00:06:09.744 "passthru": { 00:06:09.744 "name": "Passthru0", 00:06:09.744 "base_bdev_name": "Malloc2" 00:06:09.744 } 00:06:09.744 } 00:06:09.744 } 00:06:09.744 ]' 00:06:09.744 22:29:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:09.744 22:29:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:09.744 22:29:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:09.744 22:29:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.744 22:29:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.744 22:29:12 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.744 22:29:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:09.744 22:29:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.744 22:29:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.744 22:29:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.744 22:29:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:09.744 22:29:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.744 22:29:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.744 22:29:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.744 22:29:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:09.744 22:29:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:09.744 22:29:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:09.744 00:06:09.744 real 0m0.215s 00:06:09.744 user 0m0.139s 00:06:09.744 sys 0m0.024s 00:06:09.744 22:29:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.744 22:29:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.744 ************************************ 00:06:09.744 END TEST rpc_daemon_integrity 00:06:09.744 ************************************ 00:06:09.744 22:29:12 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:09.744 22:29:12 rpc -- rpc/rpc.sh@84 -- # killprocess 99864 00:06:09.744 22:29:12 rpc -- common/autotest_common.sh@950 -- # '[' -z 99864 ']' 00:06:09.744 22:29:12 rpc -- common/autotest_common.sh@954 -- # kill -0 99864 00:06:09.744 22:29:12 rpc -- common/autotest_common.sh@955 -- # uname 00:06:09.744 22:29:12 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.744 22:29:12 rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99864 00:06:09.744 22:29:12 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:09.744 22:29:12 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:09.744 22:29:12 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99864' 00:06:09.744 killing process with pid 99864 00:06:09.744 22:29:12 rpc -- common/autotest_common.sh@969 -- # kill 99864 00:06:09.744 22:29:12 rpc -- common/autotest_common.sh@974 -- # wait 99864 00:06:10.312 00:06:10.312 real 0m1.878s 00:06:10.312 user 0m2.349s 00:06:10.312 sys 0m0.587s 00:06:10.312 22:29:13 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.312 22:29:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.312 ************************************ 00:06:10.312 END TEST rpc 00:06:10.312 ************************************ 00:06:10.312 22:29:13 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:10.312 22:29:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.312 22:29:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.312 22:29:13 -- common/autotest_common.sh@10 -- # set +x 00:06:10.312 ************************************ 00:06:10.312 START TEST skip_rpc 00:06:10.312 ************************************ 00:06:10.312 22:29:13 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:10.312 * Looking for test storage... 
00:06:10.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:10.312 22:29:13 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:10.312 22:29:13 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:10.312 22:29:13 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:10.312 22:29:13 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:10.312 22:29:13 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.312 22:29:13 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.312 22:29:13 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.312 22:29:13 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.312 22:29:13 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.312 22:29:13 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.312 22:29:13 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.312 22:29:13 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.312 22:29:13 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.312 22:29:13 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.312 22:29:13 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.312 22:29:13 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:10.312 22:29:13 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:10.312 22:29:13 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.312 22:29:13 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.312 22:29:13 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:10.312 22:29:13 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:10.312 22:29:13 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.312 22:29:13 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:10.312 22:29:13 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.312 22:29:13 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:10.312 22:29:13 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:10.312 22:29:13 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.312 22:29:13 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:10.312 22:29:13 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.312 22:29:13 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.312 22:29:13 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.312 22:29:13 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:10.312 22:29:13 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.312 22:29:13 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:10.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.312 --rc genhtml_branch_coverage=1 00:06:10.312 --rc genhtml_function_coverage=1 00:06:10.312 --rc genhtml_legend=1 00:06:10.312 --rc geninfo_all_blocks=1 00:06:10.312 --rc geninfo_unexecuted_blocks=1 00:06:10.312 00:06:10.312 ' 00:06:10.312 22:29:13 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:10.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.312 --rc genhtml_branch_coverage=1 00:06:10.312 --rc genhtml_function_coverage=1 00:06:10.312 --rc genhtml_legend=1 00:06:10.312 --rc geninfo_all_blocks=1 00:06:10.312 --rc geninfo_unexecuted_blocks=1 00:06:10.312 00:06:10.312 ' 00:06:10.312 22:29:13 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:06:10.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.312 --rc genhtml_branch_coverage=1 00:06:10.312 --rc genhtml_function_coverage=1 00:06:10.312 --rc genhtml_legend=1 00:06:10.312 --rc geninfo_all_blocks=1 00:06:10.312 --rc geninfo_unexecuted_blocks=1 00:06:10.312 00:06:10.312 ' 00:06:10.312 22:29:13 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:10.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.312 --rc genhtml_branch_coverage=1 00:06:10.312 --rc genhtml_function_coverage=1 00:06:10.312 --rc genhtml_legend=1 00:06:10.312 --rc geninfo_all_blocks=1 00:06:10.312 --rc geninfo_unexecuted_blocks=1 00:06:10.312 00:06:10.312 ' 00:06:10.312 22:29:13 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:10.312 22:29:13 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:10.312 22:29:13 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:10.312 22:29:13 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.312 22:29:13 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.312 22:29:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.312 ************************************ 00:06:10.312 START TEST skip_rpc 00:06:10.312 ************************************ 00:06:10.312 22:29:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:10.312 22:29:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=100225 00:06:10.312 22:29:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:10.312 22:29:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.312 22:29:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:06:10.312 [2024-10-11 22:29:13.557382] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:06:10.312 [2024-10-11 22:29:13.557458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100225 ] 00:06:10.571 [2024-10-11 22:29:13.613943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.571 [2024-10-11 22:29:13.660200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.838 22:29:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:15.838 22:29:18 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:15.838 22:29:18 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:15.838 22:29:18 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:15.838 22:29:18 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.838 22:29:18 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:15.838 22:29:18 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.838 22:29:18 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:15.838 22:29:18 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.838 22:29:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.838 22:29:18 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:15.838 22:29:18 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:15.838 22:29:18 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:15.838 22:29:18 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:15.838 22:29:18 
skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:15.838 22:29:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:15.838 22:29:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 100225 00:06:15.838 22:29:18 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 100225 ']' 00:06:15.838 22:29:18 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 100225 00:06:15.838 22:29:18 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:15.838 22:29:18 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.838 22:29:18 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100225 00:06:15.838 22:29:18 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.838 22:29:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.838 22:29:18 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100225' 00:06:15.838 killing process with pid 100225 00:06:15.838 22:29:18 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 100225 00:06:15.838 22:29:18 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 100225 00:06:15.838 00:06:15.838 real 0m5.403s 00:06:15.838 user 0m5.112s 00:06:15.838 sys 0m0.300s 00:06:15.838 22:29:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.838 22:29:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.838 ************************************ 00:06:15.838 END TEST skip_rpc 00:06:15.838 ************************************ 00:06:15.838 22:29:18 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:15.838 22:29:18 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.838 22:29:18 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.838 22:29:18 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:06:15.838 ************************************ 00:06:15.838 START TEST skip_rpc_with_json 00:06:15.838 ************************************ 00:06:15.838 22:29:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:15.838 22:29:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:15.838 22:29:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=100913 00:06:15.838 22:29:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.838 22:29:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:15.838 22:29:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 100913 00:06:15.838 22:29:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 100913 ']' 00:06:15.838 22:29:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.838 22:29:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.838 22:29:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.838 22:29:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.838 22:29:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:15.838 [2024-10-11 22:29:19.017822] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:06:15.838 [2024-10-11 22:29:19.017917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100913 ] 00:06:15.838 [2024-10-11 22:29:19.076280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.097 [2024-10-11 22:29:19.127260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.355 22:29:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.355 22:29:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:16.355 22:29:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:16.355 22:29:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.355 22:29:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:16.355 [2024-10-11 22:29:19.388060] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:16.355 request: 00:06:16.355 { 00:06:16.355 "trtype": "tcp", 00:06:16.355 "method": "nvmf_get_transports", 00:06:16.355 "req_id": 1 00:06:16.355 } 00:06:16.355 Got JSON-RPC error response 00:06:16.355 response: 00:06:16.355 { 00:06:16.355 "code": -19, 00:06:16.356 "message": "No such device" 00:06:16.356 } 00:06:16.356 22:29:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:16.356 22:29:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:16.356 22:29:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.356 22:29:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:16.356 [2024-10-11 22:29:19.396170] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:16.356 22:29:19 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.356 22:29:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:16.356 22:29:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.356 22:29:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:16.356 22:29:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.356 22:29:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:16.356 { 00:06:16.356 "subsystems": [ 00:06:16.356 { 00:06:16.356 "subsystem": "fsdev", 00:06:16.356 "config": [ 00:06:16.356 { 00:06:16.356 "method": "fsdev_set_opts", 00:06:16.356 "params": { 00:06:16.356 "fsdev_io_pool_size": 65535, 00:06:16.356 "fsdev_io_cache_size": 256 00:06:16.356 } 00:06:16.356 } 00:06:16.356 ] 00:06:16.356 }, 00:06:16.356 { 00:06:16.356 "subsystem": "vfio_user_target", 00:06:16.356 "config": null 00:06:16.356 }, 00:06:16.356 { 00:06:16.356 "subsystem": "keyring", 00:06:16.356 "config": [] 00:06:16.356 }, 00:06:16.356 { 00:06:16.356 "subsystem": "iobuf", 00:06:16.356 "config": [ 00:06:16.356 { 00:06:16.356 "method": "iobuf_set_options", 00:06:16.356 "params": { 00:06:16.356 "small_pool_count": 8192, 00:06:16.356 "large_pool_count": 1024, 00:06:16.356 "small_bufsize": 8192, 00:06:16.356 "large_bufsize": 135168 00:06:16.356 } 00:06:16.356 } 00:06:16.356 ] 00:06:16.356 }, 00:06:16.356 { 00:06:16.356 "subsystem": "sock", 00:06:16.356 "config": [ 00:06:16.356 { 00:06:16.356 "method": "sock_set_default_impl", 00:06:16.356 "params": { 00:06:16.356 "impl_name": "posix" 00:06:16.356 } 00:06:16.356 }, 00:06:16.356 { 00:06:16.356 "method": "sock_impl_set_options", 00:06:16.356 "params": { 00:06:16.356 "impl_name": "ssl", 00:06:16.356 "recv_buf_size": 4096, 00:06:16.356 "send_buf_size": 4096, 00:06:16.356 "enable_recv_pipe": true, 
00:06:16.356 "enable_quickack": false, 00:06:16.356 "enable_placement_id": 0, 00:06:16.356 "enable_zerocopy_send_server": true, 00:06:16.356 "enable_zerocopy_send_client": false, 00:06:16.356 "zerocopy_threshold": 0, 00:06:16.356 "tls_version": 0, 00:06:16.356 "enable_ktls": false 00:06:16.356 } 00:06:16.356 }, 00:06:16.356 { 00:06:16.356 "method": "sock_impl_set_options", 00:06:16.356 "params": { 00:06:16.356 "impl_name": "posix", 00:06:16.356 "recv_buf_size": 2097152, 00:06:16.356 "send_buf_size": 2097152, 00:06:16.356 "enable_recv_pipe": true, 00:06:16.356 "enable_quickack": false, 00:06:16.356 "enable_placement_id": 0, 00:06:16.356 "enable_zerocopy_send_server": true, 00:06:16.356 "enable_zerocopy_send_client": false, 00:06:16.356 "zerocopy_threshold": 0, 00:06:16.356 "tls_version": 0, 00:06:16.356 "enable_ktls": false 00:06:16.356 } 00:06:16.356 } 00:06:16.356 ] 00:06:16.356 }, 00:06:16.356 { 00:06:16.356 "subsystem": "vmd", 00:06:16.356 "config": [] 00:06:16.356 }, 00:06:16.356 { 00:06:16.356 "subsystem": "accel", 00:06:16.356 "config": [ 00:06:16.356 { 00:06:16.356 "method": "accel_set_options", 00:06:16.356 "params": { 00:06:16.356 "small_cache_size": 128, 00:06:16.356 "large_cache_size": 16, 00:06:16.356 "task_count": 2048, 00:06:16.356 "sequence_count": 2048, 00:06:16.356 "buf_count": 2048 00:06:16.356 } 00:06:16.356 } 00:06:16.356 ] 00:06:16.356 }, 00:06:16.356 { 00:06:16.356 "subsystem": "bdev", 00:06:16.356 "config": [ 00:06:16.356 { 00:06:16.356 "method": "bdev_set_options", 00:06:16.356 "params": { 00:06:16.356 "bdev_io_pool_size": 65535, 00:06:16.356 "bdev_io_cache_size": 256, 00:06:16.356 "bdev_auto_examine": true, 00:06:16.356 "iobuf_small_cache_size": 128, 00:06:16.356 "iobuf_large_cache_size": 16 00:06:16.356 } 00:06:16.356 }, 00:06:16.356 { 00:06:16.356 "method": "bdev_raid_set_options", 00:06:16.356 "params": { 00:06:16.356 "process_window_size_kb": 1024, 00:06:16.356 "process_max_bandwidth_mb_sec": 0 00:06:16.356 } 00:06:16.356 }, 
00:06:16.356 {
00:06:16.356 "method": "bdev_iscsi_set_options",
00:06:16.356 "params": {
00:06:16.356 "timeout_sec": 30
00:06:16.356 }
00:06:16.356 },
00:06:16.356 {
00:06:16.356 "method": "bdev_nvme_set_options",
00:06:16.356 "params": {
00:06:16.356 "action_on_timeout": "none",
00:06:16.356 "timeout_us": 0,
00:06:16.356 "timeout_admin_us": 0,
00:06:16.356 "keep_alive_timeout_ms": 10000,
00:06:16.356 "arbitration_burst": 0,
00:06:16.356 "low_priority_weight": 0,
00:06:16.356 "medium_priority_weight": 0,
00:06:16.356 "high_priority_weight": 0,
00:06:16.356 "nvme_adminq_poll_period_us": 10000,
00:06:16.356 "nvme_ioq_poll_period_us": 0,
00:06:16.356 "io_queue_requests": 0,
00:06:16.356 "delay_cmd_submit": true,
00:06:16.356 "transport_retry_count": 4,
00:06:16.356 "bdev_retry_count": 3,
00:06:16.356 "transport_ack_timeout": 0,
00:06:16.356 "ctrlr_loss_timeout_sec": 0,
00:06:16.356 "reconnect_delay_sec": 0,
00:06:16.356 "fast_io_fail_timeout_sec": 0,
00:06:16.356 "disable_auto_failback": false,
00:06:16.356 "generate_uuids": false,
00:06:16.356 "transport_tos": 0,
00:06:16.356 "nvme_error_stat": false,
00:06:16.356 "rdma_srq_size": 0,
00:06:16.356 "io_path_stat": false,
00:06:16.356 "allow_accel_sequence": false,
00:06:16.356 "rdma_max_cq_size": 0,
00:06:16.356 "rdma_cm_event_timeout_ms": 0,
00:06:16.356 "dhchap_digests": [
00:06:16.356 "sha256",
00:06:16.356 "sha384",
00:06:16.356 "sha512"
00:06:16.356 ],
00:06:16.356 "dhchap_dhgroups": [
00:06:16.356 "null",
00:06:16.356 "ffdhe2048",
00:06:16.356 "ffdhe3072",
00:06:16.356 "ffdhe4096",
00:06:16.356 "ffdhe6144",
00:06:16.356 "ffdhe8192"
00:06:16.356 ]
00:06:16.356 }
00:06:16.356 },
00:06:16.356 {
00:06:16.356 "method": "bdev_nvme_set_hotplug",
00:06:16.356 "params": {
00:06:16.356 "period_us": 100000,
00:06:16.356 "enable": false
00:06:16.356 }
00:06:16.356 },
00:06:16.356 {
00:06:16.356 "method": "bdev_wait_for_examine"
00:06:16.356 }
00:06:16.356 ]
00:06:16.356 },
00:06:16.356 {
00:06:16.356 "subsystem": "scsi",
00:06:16.356 "config": null
00:06:16.356 },
00:06:16.356 {
00:06:16.356 "subsystem": "scheduler",
00:06:16.356 "config": [
00:06:16.356 {
00:06:16.356 "method": "framework_set_scheduler",
00:06:16.356 "params": {
00:06:16.356 "name": "static"
00:06:16.356 }
00:06:16.356 }
00:06:16.356 ]
00:06:16.356 },
00:06:16.356 {
00:06:16.356 "subsystem": "vhost_scsi",
00:06:16.356 "config": []
00:06:16.356 },
00:06:16.356 {
00:06:16.356 "subsystem": "vhost_blk",
00:06:16.356 "config": []
00:06:16.356 },
00:06:16.356 {
00:06:16.356 "subsystem": "ublk",
00:06:16.356 "config": []
00:06:16.356 },
00:06:16.356 {
00:06:16.356 "subsystem": "nbd",
00:06:16.356 "config": []
00:06:16.356 },
00:06:16.356 {
00:06:16.356 "subsystem": "nvmf",
00:06:16.356 "config": [
00:06:16.356 {
00:06:16.356 "method": "nvmf_set_config",
00:06:16.356 "params": {
00:06:16.356 "discovery_filter": "match_any",
00:06:16.356 "admin_cmd_passthru": {
00:06:16.356 "identify_ctrlr": false
00:06:16.356 },
00:06:16.356 "dhchap_digests": [
00:06:16.356 "sha256",
00:06:16.356 "sha384",
00:06:16.356 "sha512"
00:06:16.356 ],
00:06:16.356 "dhchap_dhgroups": [
00:06:16.356 "null",
00:06:16.356 "ffdhe2048",
00:06:16.356 "ffdhe3072",
00:06:16.356 "ffdhe4096",
00:06:16.356 "ffdhe6144",
00:06:16.356 "ffdhe8192"
00:06:16.356 ]
00:06:16.356 }
00:06:16.356 },
00:06:16.356 {
00:06:16.356 "method": "nvmf_set_max_subsystems",
00:06:16.356 "params": {
00:06:16.356 "max_subsystems": 1024
00:06:16.356 }
00:06:16.356 },
00:06:16.356 {
00:06:16.356 "method": "nvmf_set_crdt",
00:06:16.356 "params": {
00:06:16.356 "crdt1": 0,
00:06:16.356 "crdt2": 0,
00:06:16.356 "crdt3": 0
00:06:16.356 }
00:06:16.356 },
00:06:16.356 {
00:06:16.356 "method": "nvmf_create_transport",
00:06:16.356 "params": {
00:06:16.356 "trtype": "TCP",
00:06:16.356 "max_queue_depth": 128,
00:06:16.356 "max_io_qpairs_per_ctrlr": 127,
00:06:16.356 "in_capsule_data_size": 4096,
00:06:16.356 "max_io_size": 131072,
00:06:16.356 "io_unit_size": 131072,
00:06:16.356 "max_aq_depth": 128,
00:06:16.356 "num_shared_buffers": 511,
00:06:16.356 "buf_cache_size": 4294967295,
00:06:16.356 "dif_insert_or_strip": false,
00:06:16.356 "zcopy": false,
00:06:16.356 "c2h_success": true,
00:06:16.356 "sock_priority": 0,
00:06:16.356 "abort_timeout_sec": 1,
00:06:16.356 "ack_timeout": 0,
00:06:16.356 "data_wr_pool_size": 0
00:06:16.356 }
00:06:16.356 }
00:06:16.356 ]
00:06:16.356 },
00:06:16.356 {
00:06:16.356 "subsystem": "iscsi",
00:06:16.356 "config": [
00:06:16.356 {
00:06:16.356 "method": "iscsi_set_options",
00:06:16.356 "params": {
00:06:16.356 "node_base": "iqn.2016-06.io.spdk",
00:06:16.356 "max_sessions": 128,
00:06:16.356 "max_connections_per_session": 2,
00:06:16.357 "max_queue_depth": 64,
00:06:16.357 "default_time2wait": 2,
00:06:16.357 "default_time2retain": 20,
00:06:16.357 "first_burst_length": 8192,
00:06:16.357 "immediate_data": true,
00:06:16.357 "allow_duplicated_isid": false,
00:06:16.357 "error_recovery_level": 0,
00:06:16.357 "nop_timeout": 60,
00:06:16.357 "nop_in_interval": 30,
00:06:16.357 "disable_chap": false,
00:06:16.357 "require_chap": false,
00:06:16.357 "mutual_chap": false,
00:06:16.357 "chap_group": 0,
00:06:16.357 "max_large_datain_per_connection": 64,
00:06:16.357 "max_r2t_per_connection": 4,
00:06:16.357 "pdu_pool_size": 36864,
00:06:16.357 "immediate_data_pool_size": 16384,
00:06:16.357 "data_out_pool_size": 2048
00:06:16.357 }
00:06:16.357 }
00:06:16.357 ]
00:06:16.357 }
00:06:16.357 ]
00:06:16.357 }
00:06:16.357 22:29:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:06:16.357 22:29:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 100913
00:06:16.357 22:29:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 100913 ']'
00:06:16.357 22:29:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 100913
00:06:16.357 22:29:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname
00:06:16.357 22:29:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:16.357 22:29:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100913
00:06:16.357 22:29:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:16.357 22:29:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:16.357 22:29:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100913'
killing process with pid 100913
00:06:16.357 22:29:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 100913
00:06:16.357 22:29:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 100913
00:06:16.924 22:29:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=101052
00:06:16.924 22:29:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:06:16.924 22:29:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:06:22.188 22:29:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 101052
00:06:22.188 22:29:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 101052 ']'
00:06:22.188 22:29:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 101052
00:06:22.188 22:29:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname
00:06:22.188 22:29:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:22.188 22:29:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101052
00:06:22.188 22:29:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:22.188 22:29:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:22.188 22:29:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101052'
killing process with pid 101052
00:06:22.188 22:29:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 101052
00:06:22.188 22:29:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 101052
00:06:22.188 22:29:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:06:22.188 22:29:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:06:22.188
00:06:22.188 real 0m6.412s
00:06:22.188 user 0m6.036s
00:06:22.188 sys 0m0.717s
00:06:22.188 22:29:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:22.188 22:29:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:22.188 ************************************
00:06:22.188 END TEST skip_rpc_with_json
00:06:22.188 ************************************
00:06:22.188 22:29:25 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:06:22.188 22:29:25 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:22.188 22:29:25 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:22.188 22:29:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:22.188 ************************************
00:06:22.188 START TEST skip_rpc_with_delay
00:06:22.188 ************************************
00:06:22.188 22:29:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay
00:06:22.188 22:29:25 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:06:22.188 22:29:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0
00:06:22.188 22:29:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:06:22.188 22:29:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:06:22.188 22:29:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:22.188 22:29:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:06:22.188 22:29:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:22.188 22:29:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:06:22.188 22:29:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:22.188 22:29:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:06:22.188 22:29:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:06:22.188 22:29:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:06:22.447 [2024-10-11 22:29:25.484119] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:06:22.447 22:29:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1
00:06:22.447 22:29:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:22.447 22:29:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:22.447 22:29:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:22.447
00:06:22.447 real 0m0.072s
00:06:22.447 user 0m0.042s
00:06:22.447 sys 0m0.029s
00:06:22.447 22:29:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:22.447 22:29:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:06:22.447 ************************************
00:06:22.447 END TEST skip_rpc_with_delay
00:06:22.447 ************************************
00:06:22.447 22:29:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:06:22.447 22:29:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:06:22.447 22:29:25 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:06:22.447 22:29:25 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:22.447 22:29:25 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:22.447 22:29:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:22.447 ************************************
00:06:22.447 START TEST exit_on_failed_rpc_init
00:06:22.447 ************************************
00:06:22.447 22:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init
00:06:22.447 22:29:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=101770
00:06:22.447 22:29:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:22.447 22:29:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 101770
00:06:22.447 22:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 101770 ']'
00:06:22.447 22:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:22.447 22:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:22.447 22:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:22.447 22:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:22.447 22:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:06:22.447 [2024-10-11 22:29:25.608744] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization...
00:06:22.447 [2024-10-11 22:29:25.608839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101770 ]
00:06:22.447 [2024-10-11 22:29:25.666672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:22.720 [2024-10-11 22:29:25.717744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:22.720 22:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:22.720 22:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0
00:06:22.720 22:29:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:22.720 22:29:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:06:22.720 22:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0
00:06:22.720 22:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:06:22.720 22:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:06:22.720 22:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:22.720 22:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:06:22.720 22:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:22.720 22:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:06:22.720 22:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:22.720 22:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:06:22.720 22:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:06:22.720 22:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:06:22.987 [2024-10-11 22:29:26.032724] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization...
00:06:22.987 [2024-10-11 22:29:26.032814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101776 ]
00:06:22.987 [2024-10-11 22:29:26.090026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:22.987 [2024-10-11 22:29:26.137032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:22.987 [2024-10-11 22:29:26.137143] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:06:22.987 [2024-10-11 22:29:26.137162] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:06:22.987 [2024-10-11 22:29:26.137180] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:22.987 22:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234
00:06:22.987 22:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:22.987 22:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106
00:06:22.987 22:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in
00:06:22.987 22:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1
00:06:22.987 22:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:22.987 22:29:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:22.987 22:29:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 101770
00:06:22.987 22:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 101770 ']'
00:06:22.987 22:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 101770
00:06:22.987 22:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname
00:06:22.987 22:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:22.987 22:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101770
00:06:22.987 22:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:22.987 22:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:22.987 22:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101770'
killing process with pid 101770
00:06:22.987 22:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 101770
00:06:22.987 22:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 101770
00:06:23.555
00:06:23.555 real 0m1.052s
00:06:23.555 user 0m1.126s
00:06:23.555 sys 0m0.427s
00:06:23.555 22:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:23.555 22:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:06:23.555 ************************************
00:06:23.555 END TEST exit_on_failed_rpc_init
00:06:23.555 ************************************
00:06:23.555 22:29:26 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:06:23.555
00:06:23.555 real 0m13.300s
00:06:23.555 user 0m12.502s
00:06:23.555 sys 0m1.669s
00:06:23.555 22:29:26 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:23.555 22:29:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:23.555 ************************************
00:06:23.555 END TEST skip_rpc
00:06:23.555 ************************************
00:06:23.555 22:29:26 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:06:23.555 22:29:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:23.555 22:29:26 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:23.555 22:29:26 -- common/autotest_common.sh@10 -- # set +x
00:06:23.555 ************************************
00:06:23.555 START TEST rpc_client
00:06:23.555 ************************************
00:06:23.555 22:29:26 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:06:23.555 * Looking for test storage...
00:06:23.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client
00:06:23.555 22:29:26 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:06:23.555 22:29:26 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version
00:06:23.555 22:29:26 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:06:23.555 22:29:26 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:06:23.555 22:29:26 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:23.555 22:29:26 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:23.555 22:29:26 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:23.555 22:29:26 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:06:23.555 22:29:26 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:06:23.555 22:29:26 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:06:23.555 22:29:26 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:06:23.555 22:29:26 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:06:23.555 22:29:26 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:06:23.555 22:29:26 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:06:23.555 22:29:26 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:23.555 22:29:26 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:06:23.555 22:29:26 rpc_client -- scripts/common.sh@345 -- # : 1
00:06:23.555 22:29:26 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:23.555 22:29:26 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:23.555 22:29:26 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:06:23.555 22:29:26 rpc_client -- scripts/common.sh@353 -- # local d=1
00:06:23.555 22:29:26 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:23.555 22:29:26 rpc_client -- scripts/common.sh@355 -- # echo 1
00:06:23.555 22:29:26 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:06:23.815 22:29:26 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:06:23.815 22:29:26 rpc_client -- scripts/common.sh@353 -- # local d=2
00:06:23.815 22:29:26 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:23.815 22:29:26 rpc_client -- scripts/common.sh@355 -- # echo 2
00:06:23.815 22:29:26 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:06:23.815 22:29:26 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:23.815 22:29:26 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:23.815 22:29:26 rpc_client -- scripts/common.sh@368 -- # return 0
00:06:23.815 22:29:26 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:23.815 22:29:26 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:06:23.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:23.815 --rc genhtml_branch_coverage=1
00:06:23.815 --rc genhtml_function_coverage=1
00:06:23.815 --rc genhtml_legend=1
00:06:23.815 --rc geninfo_all_blocks=1
00:06:23.815 --rc geninfo_unexecuted_blocks=1
00:06:23.815
00:06:23.815 '
00:06:23.815 22:29:26 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:06:23.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:23.815 --rc genhtml_branch_coverage=1
00:06:23.815 --rc genhtml_function_coverage=1
00:06:23.815 --rc genhtml_legend=1
00:06:23.815 --rc geninfo_all_blocks=1
00:06:23.815 --rc geninfo_unexecuted_blocks=1
00:06:23.815
00:06:23.815 '
00:06:23.815 22:29:26 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:06:23.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:23.815 --rc genhtml_branch_coverage=1
00:06:23.815 --rc genhtml_function_coverage=1
00:06:23.815 --rc genhtml_legend=1
00:06:23.815 --rc geninfo_all_blocks=1
00:06:23.815 --rc geninfo_unexecuted_blocks=1
00:06:23.815
00:06:23.815 '
00:06:23.815 22:29:26 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:06:23.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:23.815 --rc genhtml_branch_coverage=1
00:06:23.815 --rc genhtml_function_coverage=1
00:06:23.815 --rc genhtml_legend=1
00:06:23.815 --rc geninfo_all_blocks=1
00:06:23.815 --rc geninfo_unexecuted_blocks=1
00:06:23.815
00:06:23.815 '
00:06:23.815 22:29:26 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:06:23.815 OK
00:06:23.815 22:29:26 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:06:23.815
00:06:23.815 real 0m0.161s
00:06:23.815 user 0m0.098s
00:06:23.815 sys 0m0.072s
00:06:23.815 22:29:26 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:23.815 22:29:26 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:06:23.815 ************************************
00:06:23.815 END TEST rpc_client
00:06:23.815 ************************************
00:06:23.815 22:29:26 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:06:23.815 22:29:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:23.815 22:29:26 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:23.815 22:29:26 -- common/autotest_common.sh@10 -- # set +x
00:06:23.815 ************************************
00:06:23.815 START TEST json_config
00:06:23.815 ************************************
00:06:23.815 22:29:26 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:06:23.815 22:29:26 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:06:23.815 22:29:26 json_config -- common/autotest_common.sh@1691 -- # lcov --version
00:06:23.815 22:29:26 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:06:23.815 22:29:27 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:06:23.815 22:29:27 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:23.815 22:29:27 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:23.815 22:29:27 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:23.815 22:29:27 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:06:23.815 22:29:27 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:06:23.815 22:29:27 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:06:23.815 22:29:27 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:06:23.816 22:29:27 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:06:23.816 22:29:27 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:06:23.816 22:29:27 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:06:23.816 22:29:27 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:23.816 22:29:27 json_config -- scripts/common.sh@344 -- # case "$op" in
00:06:23.816 22:29:27 json_config -- scripts/common.sh@345 -- # : 1
00:06:23.816 22:29:27 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:23.816 22:29:27 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:23.816 22:29:27 json_config -- scripts/common.sh@365 -- # decimal 1
00:06:23.816 22:29:27 json_config -- scripts/common.sh@353 -- # local d=1
00:06:23.816 22:29:27 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:23.816 22:29:27 json_config -- scripts/common.sh@355 -- # echo 1
00:06:23.816 22:29:27 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:06:23.816 22:29:27 json_config -- scripts/common.sh@366 -- # decimal 2
00:06:23.816 22:29:27 json_config -- scripts/common.sh@353 -- # local d=2
00:06:23.816 22:29:27 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:23.816 22:29:27 json_config -- scripts/common.sh@355 -- # echo 2
00:06:23.816 22:29:27 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:06:23.816 22:29:27 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:23.816 22:29:27 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:23.816 22:29:27 json_config -- scripts/common.sh@368 -- # return 0
00:06:23.816 22:29:27 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:23.816 22:29:27 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:06:23.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:23.816 --rc genhtml_branch_coverage=1
00:06:23.816 --rc genhtml_function_coverage=1
00:06:23.816 --rc genhtml_legend=1
00:06:23.816 --rc geninfo_all_blocks=1
00:06:23.816 --rc geninfo_unexecuted_blocks=1
00:06:23.816
00:06:23.816 '
00:06:23.816 22:29:27 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:06:23.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:23.816 --rc genhtml_branch_coverage=1
00:06:23.816 --rc genhtml_function_coverage=1
00:06:23.816 --rc genhtml_legend=1
00:06:23.816 --rc geninfo_all_blocks=1
00:06:23.816 --rc geninfo_unexecuted_blocks=1
00:06:23.816
00:06:23.816 '
00:06:23.816 22:29:27 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:06:23.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:23.816 --rc genhtml_branch_coverage=1
00:06:23.816 --rc genhtml_function_coverage=1
00:06:23.816 --rc genhtml_legend=1
00:06:23.816 --rc geninfo_all_blocks=1
00:06:23.816 --rc geninfo_unexecuted_blocks=1
00:06:23.816
00:06:23.816 '
00:06:23.816 22:29:27 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:06:23.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:23.816 --rc genhtml_branch_coverage=1
00:06:23.816 --rc genhtml_function_coverage=1
00:06:23.816 --rc genhtml_legend=1
00:06:23.816 --rc geninfo_all_blocks=1
00:06:23.816 --rc geninfo_unexecuted_blocks=1
00:06:23.816
00:06:23.816 '
00:06:23.816 22:29:27 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:06:23.816 22:29:27 json_config -- nvmf/common.sh@7 -- # uname -s
00:06:23.816 22:29:27 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:23.816 22:29:27 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:23.816 22:29:27 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:23.816 22:29:27 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:23.816 22:29:27 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:23.816 22:29:27 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:23.816 22:29:27 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:23.816 22:29:27 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:23.816 22:29:27 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:23.816 22:29:27 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:23.816 22:29:27 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:06:23.816 22:29:27 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:06:23.816 22:29:27 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:23.816 22:29:27 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:23.816 22:29:27 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:06:23.816 22:29:27 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:23.816 22:29:27 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:06:23.816 22:29:27 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:06:23.816 22:29:27 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:23.816 22:29:27 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:23.816 22:29:27 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:23.816 22:29:27 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:23.816 22:29:27 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:23.816 22:29:27 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:23.816 22:29:27 json_config -- paths/export.sh@5 -- # export PATH
00:06:23.816 22:29:27 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:23.816 22:29:27 json_config -- nvmf/common.sh@51 -- # : 0
00:06:23.816 22:29:27 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:06:23.816 22:29:27 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:06:23.816 22:29:27 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:23.816 22:29:27 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:23.816 22:29:27 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:23.816 22:29:27 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:06:23.816 22:29:27 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:06:23.816 22:29:27 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:06:23.816 22:29:27 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:06:23.816 22:29:27 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:06:23.816 22:29:27 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:06:23.816 22:29:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:06:23.816 22:29:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:06:23.816 22:29:27 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:06:23.816 22:29:27 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:06:23.816 22:29:27 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:06:23.816 22:29:27 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:06:23.816 22:29:27 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:06:23.816 22:29:27 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:06:23.816 22:29:27 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:06:23.816 22:29:27 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json')
00:06:23.816 22:29:27 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:06:23.816 22:29:27 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
00:06:23.816 22:29:27 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:06:23.816 22:29:27 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init'
INFO: JSON configuration test init
00:06:23.816 22:29:27 json_config -- json_config/json_config.sh@364 -- # json_config_test_init
00:06:23.816 22:29:27 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init
00:06:23.816 22:29:27 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:23.816 22:29:27 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:23.816 22:29:27 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target
00:06:23.816 22:29:27 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:23.816 22:29:27 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:23.816 22:29:27 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc
00:06:23.816 22:29:27 json_config -- json_config/common.sh@9 -- # local app=target
00:06:23.816 22:29:27 json_config -- json_config/common.sh@10 -- # shift
00:06:23.816 22:29:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:06:23.816 22:29:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:06:23.816 22:29:27 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:06:23.816 22:29:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:23.816 22:29:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:23.816 22:29:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=102036
00:06:23.816 22:29:27 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:06:23.816 22:29:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
Waiting for target to run...
00:06:23.816 22:29:27 json_config -- json_config/common.sh@25 -- # waitforlisten 102036 /var/tmp/spdk_tgt.sock 00:06:23.816 22:29:27 json_config -- common/autotest_common.sh@831 -- # '[' -z 102036 ']' 00:06:23.816 22:29:27 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:23.816 22:29:27 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.816 22:29:27 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:23.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:23.817 22:29:27 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.817 22:29:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.075 [2024-10-11 22:29:27.099974] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:06:24.075 [2024-10-11 22:29:27.100055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102036 ] 00:06:24.642 [2024-10-11 22:29:27.610099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.642 [2024-10-11 22:29:27.649979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.901 22:29:28 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.901 22:29:28 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:24.901 22:29:28 json_config -- json_config/common.sh@26 -- # echo '' 00:06:24.901 00:06:24.901 22:29:28 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:24.901 22:29:28 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:24.901 22:29:28 json_config -- common/autotest_common.sh@724 
-- # xtrace_disable 00:06:24.901 22:29:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.901 22:29:28 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:24.901 22:29:28 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:24.901 22:29:28 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:24.901 22:29:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.901 22:29:28 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:24.901 22:29:28 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:24.901 22:29:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:28.189 22:29:31 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:28.189 22:29:31 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:28.189 22:29:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:28.189 22:29:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:28.189 22:29:31 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:28.189 22:29:31 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:28.189 22:29:31 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:28.189 22:29:31 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:28.189 22:29:31 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:28.189 22:29:31 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:28.189 22:29:31 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:28.189 22:29:31 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:28.447 22:29:31 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:28.447 22:29:31 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:28.447 22:29:31 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:28.448 22:29:31 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:28.448 22:29:31 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:28.448 22:29:31 json_config -- json_config/json_config.sh@54 -- # sort 00:06:28.448 22:29:31 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:28.448 22:29:31 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:28.448 22:29:31 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:28.448 22:29:31 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:28.448 22:29:31 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:28.448 22:29:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:28.448 22:29:31 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:28.448 22:29:31 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:28.448 22:29:31 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:28.448 22:29:31 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:28.448 22:29:31 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:28.448 22:29:31 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:28.448 22:29:31 json_config -- json_config/json_config.sh@237 -- # timing_enter 
create_nvmf_subsystem_config 00:06:28.448 22:29:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:28.448 22:29:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:28.448 22:29:31 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:28.448 22:29:31 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:28.448 22:29:31 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:28.448 22:29:31 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:28.448 22:29:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:28.706 MallocForNvmf0 00:06:28.707 22:29:31 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:28.707 22:29:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:28.965 MallocForNvmf1 00:06:28.965 22:29:32 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:28.965 22:29:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:29.222 [2024-10-11 22:29:32.375601] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.222 22:29:32 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:29.222 22:29:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:29.479 22:29:32 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:29.479 22:29:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:29.737 22:29:32 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:29.737 22:29:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:29.994 22:29:33 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:29.994 22:29:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:30.253 [2024-10-11 22:29:33.451134] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:30.253 22:29:33 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:30.253 22:29:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:30.253 22:29:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.253 22:29:33 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:30.253 22:29:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:30.253 22:29:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.253 22:29:33 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:30.253 22:29:33 
json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:30.253 22:29:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:30.512 MallocBdevForConfigChangeCheck 00:06:30.512 22:29:33 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:30.512 22:29:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:30.512 22:29:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.771 22:29:33 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:30.771 22:29:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:31.030 22:29:34 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:31.030 INFO: shutting down applications... 
00:06:31.030 22:29:34 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:31.030 22:29:34 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:31.030 22:29:34 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:31.030 22:29:34 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:32.930 Calling clear_iscsi_subsystem 00:06:32.930 Calling clear_nvmf_subsystem 00:06:32.930 Calling clear_nbd_subsystem 00:06:32.930 Calling clear_ublk_subsystem 00:06:32.930 Calling clear_vhost_blk_subsystem 00:06:32.930 Calling clear_vhost_scsi_subsystem 00:06:32.930 Calling clear_bdev_subsystem 00:06:32.930 22:29:35 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:32.930 22:29:35 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:32.930 22:29:35 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:32.930 22:29:35 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:32.930 22:29:35 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:32.930 22:29:35 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:33.189 22:29:36 json_config -- json_config/json_config.sh@352 -- # break 00:06:33.189 22:29:36 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:33.189 22:29:36 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:33.189 22:29:36 json_config -- 
json_config/common.sh@31 -- # local app=target 00:06:33.189 22:29:36 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:33.189 22:29:36 json_config -- json_config/common.sh@35 -- # [[ -n 102036 ]] 00:06:33.189 22:29:36 json_config -- json_config/common.sh@38 -- # kill -SIGINT 102036 00:06:33.189 22:29:36 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:33.189 22:29:36 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:33.189 22:29:36 json_config -- json_config/common.sh@41 -- # kill -0 102036 00:06:33.189 22:29:36 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:33.759 22:29:36 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:33.759 22:29:36 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:33.759 22:29:36 json_config -- json_config/common.sh@41 -- # kill -0 102036 00:06:33.759 22:29:36 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:33.759 22:29:36 json_config -- json_config/common.sh@43 -- # break 00:06:33.759 22:29:36 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:33.759 22:29:36 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:33.759 SPDK target shutdown done 00:06:33.759 22:29:36 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:33.759 INFO: relaunching applications... 
00:06:33.759 22:29:36 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:33.759 22:29:36 json_config -- json_config/common.sh@9 -- # local app=target 00:06:33.759 22:29:36 json_config -- json_config/common.sh@10 -- # shift 00:06:33.759 22:29:36 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:33.759 22:29:36 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:33.759 22:29:36 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:33.759 22:29:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:33.759 22:29:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:33.759 22:29:36 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=103359 00:06:33.760 22:29:36 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:33.760 22:29:36 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:33.760 Waiting for target to run... 00:06:33.760 22:29:36 json_config -- json_config/common.sh@25 -- # waitforlisten 103359 /var/tmp/spdk_tgt.sock 00:06:33.760 22:29:36 json_config -- common/autotest_common.sh@831 -- # '[' -z 103359 ']' 00:06:33.760 22:29:36 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:33.760 22:29:36 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.760 22:29:36 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:33.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:33.760 22:29:36 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.760 22:29:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.760 [2024-10-11 22:29:36.868184] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:06:33.760 [2024-10-11 22:29:36.868290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103359 ] 00:06:34.328 [2024-10-11 22:29:37.392712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.328 [2024-10-11 22:29:37.433286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.614 [2024-10-11 22:29:40.480139] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:37.614 [2024-10-11 22:29:40.512569] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:37.614 22:29:40 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.614 22:29:40 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:37.614 22:29:40 json_config -- json_config/common.sh@26 -- # echo '' 00:06:37.614 00:06:37.614 22:29:40 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:37.614 22:29:40 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:37.614 INFO: Checking if target configuration is the same... 
00:06:37.614 22:29:40 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:37.614 22:29:40 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:37.614 22:29:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:37.614 + '[' 2 -ne 2 ']' 00:06:37.614 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:37.614 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:37.614 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:37.614 +++ basename /dev/fd/62 00:06:37.614 ++ mktemp /tmp/62.XXX 00:06:37.614 + tmp_file_1=/tmp/62.HX6 00:06:37.614 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:37.614 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:37.614 + tmp_file_2=/tmp/spdk_tgt_config.json.EPv 00:06:37.614 + ret=0 00:06:37.614 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:37.873 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:37.873 + diff -u /tmp/62.HX6 /tmp/spdk_tgt_config.json.EPv 00:06:37.873 + echo 'INFO: JSON config files are the same' 00:06:37.873 INFO: JSON config files are the same 00:06:37.873 + rm /tmp/62.HX6 /tmp/spdk_tgt_config.json.EPv 00:06:37.873 + exit 0 00:06:37.873 22:29:41 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:37.873 22:29:41 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:37.873 INFO: changing configuration and checking if this can be detected... 
00:06:37.873 22:29:41 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:37.873 22:29:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:38.131 22:29:41 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:38.131 22:29:41 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:38.131 22:29:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:38.131 + '[' 2 -ne 2 ']' 00:06:38.131 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:38.131 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:38.131 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:38.131 +++ basename /dev/fd/62 00:06:38.131 ++ mktemp /tmp/62.XXX 00:06:38.131 + tmp_file_1=/tmp/62.Wj5 00:06:38.131 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:38.131 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:38.131 + tmp_file_2=/tmp/spdk_tgt_config.json.D6j 00:06:38.131 + ret=0 00:06:38.131 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:38.442 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:38.701 + diff -u /tmp/62.Wj5 /tmp/spdk_tgt_config.json.D6j 00:06:38.701 + ret=1 00:06:38.701 + echo '=== Start of file: /tmp/62.Wj5 ===' 00:06:38.701 + cat /tmp/62.Wj5 00:06:38.701 + echo '=== End of file: /tmp/62.Wj5 ===' 00:06:38.701 + echo '' 00:06:38.701 + echo '=== Start of file: /tmp/spdk_tgt_config.json.D6j ===' 00:06:38.701 + cat /tmp/spdk_tgt_config.json.D6j 00:06:38.701 + echo '=== End of file: /tmp/spdk_tgt_config.json.D6j ===' 00:06:38.701 + echo '' 00:06:38.701 + rm /tmp/62.Wj5 /tmp/spdk_tgt_config.json.D6j 00:06:38.701 + exit 1 00:06:38.701 22:29:41 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:38.701 INFO: configuration change detected. 
00:06:38.701 22:29:41 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:38.701 22:29:41 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:38.701 22:29:41 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:38.701 22:29:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.701 22:29:41 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:38.701 22:29:41 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:38.701 22:29:41 json_config -- json_config/json_config.sh@324 -- # [[ -n 103359 ]] 00:06:38.701 22:29:41 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:38.701 22:29:41 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:38.701 22:29:41 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:38.701 22:29:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.701 22:29:41 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:38.701 22:29:41 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:38.701 22:29:41 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:38.701 22:29:41 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:38.701 22:29:41 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:38.701 22:29:41 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:38.701 22:29:41 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:38.701 22:29:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.701 22:29:41 json_config -- json_config/json_config.sh@330 -- # killprocess 103359 00:06:38.701 22:29:41 json_config -- common/autotest_common.sh@950 -- # '[' -z 103359 ']' 00:06:38.701 22:29:41 json_config -- common/autotest_common.sh@954 -- # kill -0 103359 
00:06:38.701 22:29:41 json_config -- common/autotest_common.sh@955 -- # uname 00:06:38.701 22:29:41 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.701 22:29:41 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 103359 00:06:38.701 22:29:41 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:38.701 22:29:41 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:38.701 22:29:41 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 103359' 00:06:38.701 killing process with pid 103359 00:06:38.701 22:29:41 json_config -- common/autotest_common.sh@969 -- # kill 103359 00:06:38.701 22:29:41 json_config -- common/autotest_common.sh@974 -- # wait 103359 00:06:40.606 22:29:43 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:40.606 22:29:43 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:40.606 22:29:43 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:40.606 22:29:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:40.606 22:29:43 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:40.606 22:29:43 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:40.606 INFO: Success 00:06:40.606 00:06:40.606 real 0m16.554s 00:06:40.606 user 0m18.536s 00:06:40.606 sys 0m2.230s 00:06:40.606 22:29:43 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.606 22:29:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:40.606 ************************************ 00:06:40.606 END TEST json_config 00:06:40.606 ************************************ 00:06:40.606 22:29:43 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:40.606 22:29:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.606 22:29:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.606 22:29:43 -- common/autotest_common.sh@10 -- # set +x 00:06:40.606 ************************************ 00:06:40.606 START TEST json_config_extra_key 00:06:40.606 ************************************ 00:06:40.606 22:29:43 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:40.606 22:29:43 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:40.606 22:29:43 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:06:40.606 22:29:43 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:40.606 22:29:43 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:40.606 22:29:43 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.606 22:29:43 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.606 22:29:43 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.606 22:29:43 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.606 22:29:43 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.606 22:29:43 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.606 22:29:43 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.606 22:29:43 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.606 22:29:43 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.606 22:29:43 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.607 22:29:43 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:06:40.607 22:29:43 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:40.607 22:29:43 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:40.607 22:29:43 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.607 22:29:43 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.607 22:29:43 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:40.607 22:29:43 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:40.607 22:29:43 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.607 22:29:43 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:40.607 22:29:43 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.607 22:29:43 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:40.607 22:29:43 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:40.607 22:29:43 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.607 22:29:43 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:40.607 22:29:43 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.607 22:29:43 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.607 22:29:43 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.607 22:29:43 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:40.607 22:29:43 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.607 22:29:43 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:40.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.607 --rc genhtml_branch_coverage=1 00:06:40.607 --rc genhtml_function_coverage=1 00:06:40.607 --rc genhtml_legend=1 00:06:40.607 --rc geninfo_all_blocks=1 
00:06:40.607 --rc geninfo_unexecuted_blocks=1 00:06:40.607 00:06:40.607 ' 00:06:40.607 22:29:43 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:40.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.607 --rc genhtml_branch_coverage=1 00:06:40.607 --rc genhtml_function_coverage=1 00:06:40.607 --rc genhtml_legend=1 00:06:40.607 --rc geninfo_all_blocks=1 00:06:40.607 --rc geninfo_unexecuted_blocks=1 00:06:40.607 00:06:40.607 ' 00:06:40.607 22:29:43 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:40.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.607 --rc genhtml_branch_coverage=1 00:06:40.607 --rc genhtml_function_coverage=1 00:06:40.607 --rc genhtml_legend=1 00:06:40.607 --rc geninfo_all_blocks=1 00:06:40.607 --rc geninfo_unexecuted_blocks=1 00:06:40.607 00:06:40.607 ' 00:06:40.607 22:29:43 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:40.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.607 --rc genhtml_branch_coverage=1 00:06:40.607 --rc genhtml_function_coverage=1 00:06:40.607 --rc genhtml_legend=1 00:06:40.607 --rc geninfo_all_blocks=1 00:06:40.607 --rc geninfo_unexecuted_blocks=1 00:06:40.607 00:06:40.607 ' 00:06:40.607 22:29:43 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:40.607 22:29:43 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:40.607 22:29:43 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:40.607 22:29:43 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:40.607 22:29:43 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:40.607 22:29:43 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:40.607 22:29:43 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
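The `lt 1.15 2` / `cmp_versions` trace above shows the script splitting two dotted version strings on `.`/`-`/`:` and comparing them field by field. A simplified sketch of that comparison (an assumption based on the traced commands, not the exact `scripts/common.sh` implementation):

```shell
#!/usr/bin/env bash
# Sketch of the version comparison traced above: split both versions into
# arrays, pad the shorter one with zeros, and compare numerically per field.
# Returns 0 ("true") when $1 is strictly less than $2, matching the traced
# 'return 0' for 'lt 1.15 2'.
version_lt() {
    local -a ver1 ver2
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$2"
    local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < max; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
        (( a < b )) && return 0           # first differing field decides
        (( a > b )) && return 1
    done
    return 1                              # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

This mirrors why the trace ends in `return 0`: at the first field, `1 < 2`, so the remaining fields are never consulted.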
00:06:40.607 22:29:43 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:40.607 22:29:43 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:40.607 22:29:43 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:40.607 22:29:43 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:40.607 22:29:43 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:40.607 22:29:43 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:40.607 22:29:43 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:40.607 22:29:43 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:40.607 22:29:43 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:40.607 22:29:43 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:40.607 22:29:43 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:40.607 22:29:43 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:40.607 22:29:43 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:40.607 22:29:43 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.607 22:29:43 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.607 22:29:43 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.607 22:29:43 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.607 22:29:43 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.607 22:29:43 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.607 22:29:43 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:40.607 22:29:43 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.607 22:29:43 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:40.607 22:29:43 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:40.607 22:29:43 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:40.607 22:29:43 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:40.607 22:29:43 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:40.607 22:29:43 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:40.607 22:29:43 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:40.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:40.607 22:29:43 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:40.607 22:29:43 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:40.607 22:29:43 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:40.607 22:29:43 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:40.607 22:29:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:40.607 22:29:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:40.607 22:29:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:40.607 22:29:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:40.607 22:29:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:40.607 22:29:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:40.607 22:29:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:40.607 22:29:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:40.607 22:29:43 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:40.607 22:29:43 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:40.607 INFO: launching applications... 00:06:40.607 22:29:43 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:40.607 22:29:43 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:40.607 22:29:43 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:40.607 22:29:43 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:40.607 22:29:43 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:40.607 22:29:43 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:40.607 22:29:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:40.607 22:29:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:40.607 22:29:43 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=104290 00:06:40.607 22:29:43 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:40.607 22:29:43 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:40.607 Waiting for target to run... 
00:06:40.607 22:29:43 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 104290 /var/tmp/spdk_tgt.sock 00:06:40.607 22:29:43 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 104290 ']' 00:06:40.607 22:29:43 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:40.607 22:29:43 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.607 22:29:43 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:40.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:40.607 22:29:43 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.607 22:29:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:40.607 [2024-10-11 22:29:43.690058] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:06:40.607 [2024-10-11 22:29:43.690155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104290 ] 00:06:41.178 [2024-10-11 22:29:44.209784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.178 [2024-10-11 22:29:44.249795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.437 22:29:44 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.437 22:29:44 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:41.437 22:29:44 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:41.437 00:06:41.437 22:29:44 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
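The `waitforlisten 104290 /var/tmp/spdk_tgt.sock` call traced above polls until the freshly started target is alive and its UNIX-domain RPC socket appears. A minimal sketch of that pattern, assuming a socket-file existence probe (the real helper in `common/autotest_common.sh` does more, e.g. RPC-level retries up to `max_retries=100`):

```shell
#!/usr/bin/env bash
# Hedged sketch of the waitforlisten pattern: poll until the target process
# is still running AND its UNIX-domain socket exists, or give up after
# max_retries attempts. The '-S' probe is an assumption for illustration.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
        [ -S "$rpc_addr" ] && return 0           # socket present: listening
        sleep 0.1
    done
    return 1                                      # timed out
}
```

The early `kill -0` check is what lets the harness fail fast when the target crashes during startup instead of burning the full retry budget.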
00:06:41.437 INFO: shutting down applications... 00:06:41.437 22:29:44 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:41.437 22:29:44 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:41.437 22:29:44 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:41.437 22:29:44 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 104290 ]] 00:06:41.437 22:29:44 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 104290 00:06:41.437 22:29:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:41.437 22:29:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:41.437 22:29:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 104290 00:06:41.437 22:29:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:42.004 22:29:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:42.004 22:29:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:42.004 22:29:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 104290 00:06:42.004 22:29:45 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:42.004 22:29:45 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:42.004 22:29:45 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:42.004 22:29:45 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:42.004 SPDK target shutdown done 00:06:42.004 22:29:45 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:42.004 Success 00:06:42.004 00:06:42.004 real 0m1.678s 00:06:42.004 user 0m1.473s 00:06:42.004 sys 0m0.621s 00:06:42.004 22:29:45 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.004 22:29:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 
00:06:42.004 ************************************ 00:06:42.004 END TEST json_config_extra_key 00:06:42.004 ************************************ 00:06:42.004 22:29:45 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:42.004 22:29:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.004 22:29:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.004 22:29:45 -- common/autotest_common.sh@10 -- # set +x 00:06:42.004 ************************************ 00:06:42.004 START TEST alias_rpc 00:06:42.004 ************************************ 00:06:42.004 22:29:45 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:42.263 * Looking for test storage... 00:06:42.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:42.263 22:29:45 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:42.263 22:29:45 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:42.263 22:29:45 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:42.263 22:29:45 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:42.263 22:29:45 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.263 22:29:45 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.263 22:29:45 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.263 22:29:45 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.263 22:29:45 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.263 22:29:45 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.263 22:29:45 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.263 22:29:45 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.263 22:29:45 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 
00:06:42.263 22:29:45 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.263 22:29:45 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.263 22:29:45 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:42.263 22:29:45 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:42.263 22:29:45 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.263 22:29:45 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:42.263 22:29:45 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:42.263 22:29:45 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:42.263 22:29:45 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.263 22:29:45 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:42.263 22:29:45 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.263 22:29:45 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:42.263 22:29:45 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:42.263 22:29:45 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.263 22:29:45 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:42.263 22:29:45 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.263 22:29:45 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.263 22:29:45 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.263 22:29:45 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:42.263 22:29:45 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.263 22:29:45 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:42.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.263 --rc genhtml_branch_coverage=1 00:06:42.263 --rc genhtml_function_coverage=1 00:06:42.263 --rc genhtml_legend=1 00:06:42.263 --rc geninfo_all_blocks=1 00:06:42.263 --rc geninfo_unexecuted_blocks=1 00:06:42.263 00:06:42.263 ' 
00:06:42.263 22:29:45 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:42.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.263 --rc genhtml_branch_coverage=1 00:06:42.263 --rc genhtml_function_coverage=1 00:06:42.263 --rc genhtml_legend=1 00:06:42.263 --rc geninfo_all_blocks=1 00:06:42.263 --rc geninfo_unexecuted_blocks=1 00:06:42.263 00:06:42.263 ' 00:06:42.263 22:29:45 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:42.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.263 --rc genhtml_branch_coverage=1 00:06:42.263 --rc genhtml_function_coverage=1 00:06:42.263 --rc genhtml_legend=1 00:06:42.263 --rc geninfo_all_blocks=1 00:06:42.263 --rc geninfo_unexecuted_blocks=1 00:06:42.263 00:06:42.263 ' 00:06:42.263 22:29:45 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:42.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.263 --rc genhtml_branch_coverage=1 00:06:42.263 --rc genhtml_function_coverage=1 00:06:42.263 --rc genhtml_legend=1 00:06:42.263 --rc geninfo_all_blocks=1 00:06:42.263 --rc geninfo_unexecuted_blocks=1 00:06:42.263 00:06:42.263 ' 00:06:42.263 22:29:45 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:42.263 22:29:45 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=104495 00:06:42.263 22:29:45 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:42.263 22:29:45 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 104495 00:06:42.263 22:29:45 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 104495 ']' 00:06:42.263 22:29:45 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.263 22:29:45 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.264 22:29:45 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.264 22:29:45 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.264 22:29:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.264 [2024-10-11 22:29:45.430528] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:06:42.264 [2024-10-11 22:29:45.430640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104495 ] 00:06:42.264 [2024-10-11 22:29:45.491606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.522 [2024-10-11 22:29:45.538069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.522 22:29:45 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.522 22:29:45 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:42.522 22:29:45 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:43.090 22:29:46 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 104495 00:06:43.090 22:29:46 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 104495 ']' 00:06:43.090 22:29:46 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 104495 00:06:43.090 22:29:46 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:43.090 22:29:46 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.090 22:29:46 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 104495 00:06:43.090 22:29:46 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:43.090 22:29:46 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:43.090 22:29:46 alias_rpc -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 104495' 00:06:43.090 killing process with pid 104495 00:06:43.090 22:29:46 alias_rpc -- common/autotest_common.sh@969 -- # kill 104495 00:06:43.090 22:29:46 alias_rpc -- common/autotest_common.sh@974 -- # wait 104495 00:06:43.349 00:06:43.349 real 0m1.271s 00:06:43.349 user 0m1.384s 00:06:43.349 sys 0m0.447s 00:06:43.349 22:29:46 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.349 22:29:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.349 ************************************ 00:06:43.349 END TEST alias_rpc 00:06:43.349 ************************************ 00:06:43.349 22:29:46 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:43.349 22:29:46 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:43.349 22:29:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.349 22:29:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.349 22:29:46 -- common/autotest_common.sh@10 -- # set +x 00:06:43.349 ************************************ 00:06:43.349 START TEST spdkcli_tcp 00:06:43.349 ************************************ 00:06:43.349 22:29:46 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:43.349 * Looking for test storage... 
00:06:43.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:43.349 22:29:46 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:43.349 22:29:46 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:43.349 22:29:46 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:43.609 22:29:46 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:43.609 22:29:46 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.609 22:29:46 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.609 22:29:46 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.609 22:29:46 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.609 22:29:46 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.609 22:29:46 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.609 22:29:46 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.609 22:29:46 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.609 22:29:46 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.609 22:29:46 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.609 22:29:46 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.609 22:29:46 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:43.609 22:29:46 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:43.609 22:29:46 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.609 22:29:46 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.609 22:29:46 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:43.609 22:29:46 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:43.609 22:29:46 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.609 22:29:46 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:43.609 22:29:46 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.609 22:29:46 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:43.609 22:29:46 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:43.609 22:29:46 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.609 22:29:46 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:43.609 22:29:46 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.609 22:29:46 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.609 22:29:46 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.609 22:29:46 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:43.609 22:29:46 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.609 22:29:46 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:43.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.609 --rc genhtml_branch_coverage=1 00:06:43.609 --rc genhtml_function_coverage=1 00:06:43.609 --rc genhtml_legend=1 00:06:43.609 --rc geninfo_all_blocks=1 00:06:43.609 --rc geninfo_unexecuted_blocks=1 00:06:43.609 00:06:43.609 ' 00:06:43.609 22:29:46 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:43.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.609 --rc genhtml_branch_coverage=1 00:06:43.609 --rc genhtml_function_coverage=1 00:06:43.609 --rc genhtml_legend=1 00:06:43.609 --rc geninfo_all_blocks=1 00:06:43.609 --rc geninfo_unexecuted_blocks=1 00:06:43.609 00:06:43.609 ' 00:06:43.609 22:29:46 spdkcli_tcp -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:43.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.609 --rc genhtml_branch_coverage=1 00:06:43.609 --rc genhtml_function_coverage=1 00:06:43.609 --rc genhtml_legend=1 00:06:43.609 --rc geninfo_all_blocks=1 00:06:43.609 --rc geninfo_unexecuted_blocks=1 00:06:43.609 00:06:43.609 ' 00:06:43.609 22:29:46 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:43.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.609 --rc genhtml_branch_coverage=1 00:06:43.609 --rc genhtml_function_coverage=1 00:06:43.609 --rc genhtml_legend=1 00:06:43.609 --rc geninfo_all_blocks=1 00:06:43.609 --rc geninfo_unexecuted_blocks=1 00:06:43.609 00:06:43.609 ' 00:06:43.609 22:29:46 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:43.609 22:29:46 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:43.609 22:29:46 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:43.609 22:29:46 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:43.609 22:29:46 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:43.609 22:29:46 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:43.609 22:29:46 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:43.609 22:29:46 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:43.609 22:29:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:43.609 22:29:46 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=104797 00:06:43.609 22:29:46 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:43.609 22:29:46 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 104797 00:06:43.609 22:29:46 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 104797 ']' 00:06:43.609 22:29:46 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.609 22:29:46 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.609 22:29:46 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.609 22:29:46 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.609 22:29:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:43.609 [2024-10-11 22:29:46.752427] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:06:43.609 [2024-10-11 22:29:46.752561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104797 ] 00:06:43.609 [2024-10-11 22:29:46.813277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.609 [2024-10-11 22:29:46.860122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.609 [2024-10-11 22:29:46.860126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.868 22:29:47 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.868 22:29:47 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:43.868 22:29:47 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=104811 00:06:43.868 22:29:47 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:43.868 22:29:47 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 
127.0.0.1 -p 9998 rpc_get_methods 00:06:44.126 [ 00:06:44.126 "bdev_malloc_delete", 00:06:44.126 "bdev_malloc_create", 00:06:44.126 "bdev_null_resize", 00:06:44.126 "bdev_null_delete", 00:06:44.126 "bdev_null_create", 00:06:44.126 "bdev_nvme_cuse_unregister", 00:06:44.126 "bdev_nvme_cuse_register", 00:06:44.126 "bdev_opal_new_user", 00:06:44.126 "bdev_opal_set_lock_state", 00:06:44.126 "bdev_opal_delete", 00:06:44.126 "bdev_opal_get_info", 00:06:44.126 "bdev_opal_create", 00:06:44.126 "bdev_nvme_opal_revert", 00:06:44.126 "bdev_nvme_opal_init", 00:06:44.126 "bdev_nvme_send_cmd", 00:06:44.126 "bdev_nvme_set_keys", 00:06:44.126 "bdev_nvme_get_path_iostat", 00:06:44.126 "bdev_nvme_get_mdns_discovery_info", 00:06:44.126 "bdev_nvme_stop_mdns_discovery", 00:06:44.126 "bdev_nvme_start_mdns_discovery", 00:06:44.126 "bdev_nvme_set_multipath_policy", 00:06:44.126 "bdev_nvme_set_preferred_path", 00:06:44.126 "bdev_nvme_get_io_paths", 00:06:44.126 "bdev_nvme_remove_error_injection", 00:06:44.126 "bdev_nvme_add_error_injection", 00:06:44.126 "bdev_nvme_get_discovery_info", 00:06:44.126 "bdev_nvme_stop_discovery", 00:06:44.126 "bdev_nvme_start_discovery", 00:06:44.126 "bdev_nvme_get_controller_health_info", 00:06:44.126 "bdev_nvme_disable_controller", 00:06:44.126 "bdev_nvme_enable_controller", 00:06:44.126 "bdev_nvme_reset_controller", 00:06:44.126 "bdev_nvme_get_transport_statistics", 00:06:44.126 "bdev_nvme_apply_firmware", 00:06:44.126 "bdev_nvme_detach_controller", 00:06:44.126 "bdev_nvme_get_controllers", 00:06:44.126 "bdev_nvme_attach_controller", 00:06:44.126 "bdev_nvme_set_hotplug", 00:06:44.126 "bdev_nvme_set_options", 00:06:44.126 "bdev_passthru_delete", 00:06:44.126 "bdev_passthru_create", 00:06:44.126 "bdev_lvol_set_parent_bdev", 00:06:44.126 "bdev_lvol_set_parent", 00:06:44.126 "bdev_lvol_check_shallow_copy", 00:06:44.126 "bdev_lvol_start_shallow_copy", 00:06:44.126 "bdev_lvol_grow_lvstore", 00:06:44.126 "bdev_lvol_get_lvols", 00:06:44.126 "bdev_lvol_get_lvstores", 
00:06:44.126 "bdev_lvol_delete", 00:06:44.126 "bdev_lvol_set_read_only", 00:06:44.126 "bdev_lvol_resize", 00:06:44.126 "bdev_lvol_decouple_parent", 00:06:44.126 "bdev_lvol_inflate", 00:06:44.127 "bdev_lvol_rename", 00:06:44.127 "bdev_lvol_clone_bdev", 00:06:44.127 "bdev_lvol_clone", 00:06:44.127 "bdev_lvol_snapshot", 00:06:44.127 "bdev_lvol_create", 00:06:44.127 "bdev_lvol_delete_lvstore", 00:06:44.127 "bdev_lvol_rename_lvstore", 00:06:44.127 "bdev_lvol_create_lvstore", 00:06:44.127 "bdev_raid_set_options", 00:06:44.127 "bdev_raid_remove_base_bdev", 00:06:44.127 "bdev_raid_add_base_bdev", 00:06:44.127 "bdev_raid_delete", 00:06:44.127 "bdev_raid_create", 00:06:44.127 "bdev_raid_get_bdevs", 00:06:44.127 "bdev_error_inject_error", 00:06:44.127 "bdev_error_delete", 00:06:44.127 "bdev_error_create", 00:06:44.127 "bdev_split_delete", 00:06:44.127 "bdev_split_create", 00:06:44.127 "bdev_delay_delete", 00:06:44.127 "bdev_delay_create", 00:06:44.127 "bdev_delay_update_latency", 00:06:44.127 "bdev_zone_block_delete", 00:06:44.127 "bdev_zone_block_create", 00:06:44.127 "blobfs_create", 00:06:44.127 "blobfs_detect", 00:06:44.127 "blobfs_set_cache_size", 00:06:44.127 "bdev_aio_delete", 00:06:44.127 "bdev_aio_rescan", 00:06:44.127 "bdev_aio_create", 00:06:44.127 "bdev_ftl_set_property", 00:06:44.127 "bdev_ftl_get_properties", 00:06:44.127 "bdev_ftl_get_stats", 00:06:44.127 "bdev_ftl_unmap", 00:06:44.127 "bdev_ftl_unload", 00:06:44.127 "bdev_ftl_delete", 00:06:44.127 "bdev_ftl_load", 00:06:44.127 "bdev_ftl_create", 00:06:44.127 "bdev_virtio_attach_controller", 00:06:44.127 "bdev_virtio_scsi_get_devices", 00:06:44.127 "bdev_virtio_detach_controller", 00:06:44.127 "bdev_virtio_blk_set_hotplug", 00:06:44.127 "bdev_iscsi_delete", 00:06:44.127 "bdev_iscsi_create", 00:06:44.127 "bdev_iscsi_set_options", 00:06:44.127 "accel_error_inject_error", 00:06:44.127 "ioat_scan_accel_module", 00:06:44.127 "dsa_scan_accel_module", 00:06:44.127 "iaa_scan_accel_module", 00:06:44.127 
"vfu_virtio_create_fs_endpoint", 00:06:44.127 "vfu_virtio_create_scsi_endpoint", 00:06:44.127 "vfu_virtio_scsi_remove_target", 00:06:44.127 "vfu_virtio_scsi_add_target", 00:06:44.127 "vfu_virtio_create_blk_endpoint", 00:06:44.127 "vfu_virtio_delete_endpoint", 00:06:44.127 "keyring_file_remove_key", 00:06:44.127 "keyring_file_add_key", 00:06:44.127 "keyring_linux_set_options", 00:06:44.127 "fsdev_aio_delete", 00:06:44.127 "fsdev_aio_create", 00:06:44.127 "iscsi_get_histogram", 00:06:44.127 "iscsi_enable_histogram", 00:06:44.127 "iscsi_set_options", 00:06:44.127 "iscsi_get_auth_groups", 00:06:44.127 "iscsi_auth_group_remove_secret", 00:06:44.127 "iscsi_auth_group_add_secret", 00:06:44.127 "iscsi_delete_auth_group", 00:06:44.127 "iscsi_create_auth_group", 00:06:44.127 "iscsi_set_discovery_auth", 00:06:44.127 "iscsi_get_options", 00:06:44.127 "iscsi_target_node_request_logout", 00:06:44.127 "iscsi_target_node_set_redirect", 00:06:44.127 "iscsi_target_node_set_auth", 00:06:44.127 "iscsi_target_node_add_lun", 00:06:44.127 "iscsi_get_stats", 00:06:44.127 "iscsi_get_connections", 00:06:44.127 "iscsi_portal_group_set_auth", 00:06:44.127 "iscsi_start_portal_group", 00:06:44.127 "iscsi_delete_portal_group", 00:06:44.127 "iscsi_create_portal_group", 00:06:44.127 "iscsi_get_portal_groups", 00:06:44.127 "iscsi_delete_target_node", 00:06:44.127 "iscsi_target_node_remove_pg_ig_maps", 00:06:44.127 "iscsi_target_node_add_pg_ig_maps", 00:06:44.127 "iscsi_create_target_node", 00:06:44.127 "iscsi_get_target_nodes", 00:06:44.127 "iscsi_delete_initiator_group", 00:06:44.127 "iscsi_initiator_group_remove_initiators", 00:06:44.127 "iscsi_initiator_group_add_initiators", 00:06:44.127 "iscsi_create_initiator_group", 00:06:44.127 "iscsi_get_initiator_groups", 00:06:44.127 "nvmf_set_crdt", 00:06:44.127 "nvmf_set_config", 00:06:44.127 "nvmf_set_max_subsystems", 00:06:44.127 "nvmf_stop_mdns_prr", 00:06:44.127 "nvmf_publish_mdns_prr", 00:06:44.127 "nvmf_subsystem_get_listeners", 00:06:44.127 
"nvmf_subsystem_get_qpairs", 00:06:44.127 "nvmf_subsystem_get_controllers", 00:06:44.127 "nvmf_get_stats", 00:06:44.127 "nvmf_get_transports", 00:06:44.127 "nvmf_create_transport", 00:06:44.127 "nvmf_get_targets", 00:06:44.127 "nvmf_delete_target", 00:06:44.127 "nvmf_create_target", 00:06:44.127 "nvmf_subsystem_allow_any_host", 00:06:44.127 "nvmf_subsystem_set_keys", 00:06:44.127 "nvmf_subsystem_remove_host", 00:06:44.127 "nvmf_subsystem_add_host", 00:06:44.127 "nvmf_ns_remove_host", 00:06:44.127 "nvmf_ns_add_host", 00:06:44.127 "nvmf_subsystem_remove_ns", 00:06:44.127 "nvmf_subsystem_set_ns_ana_group", 00:06:44.127 "nvmf_subsystem_add_ns", 00:06:44.127 "nvmf_subsystem_listener_set_ana_state", 00:06:44.127 "nvmf_discovery_get_referrals", 00:06:44.127 "nvmf_discovery_remove_referral", 00:06:44.127 "nvmf_discovery_add_referral", 00:06:44.127 "nvmf_subsystem_remove_listener", 00:06:44.127 "nvmf_subsystem_add_listener", 00:06:44.127 "nvmf_delete_subsystem", 00:06:44.127 "nvmf_create_subsystem", 00:06:44.127 "nvmf_get_subsystems", 00:06:44.127 "env_dpdk_get_mem_stats", 00:06:44.127 "nbd_get_disks", 00:06:44.127 "nbd_stop_disk", 00:06:44.127 "nbd_start_disk", 00:06:44.127 "ublk_recover_disk", 00:06:44.127 "ublk_get_disks", 00:06:44.127 "ublk_stop_disk", 00:06:44.127 "ublk_start_disk", 00:06:44.127 "ublk_destroy_target", 00:06:44.127 "ublk_create_target", 00:06:44.127 "virtio_blk_create_transport", 00:06:44.127 "virtio_blk_get_transports", 00:06:44.127 "vhost_controller_set_coalescing", 00:06:44.127 "vhost_get_controllers", 00:06:44.127 "vhost_delete_controller", 00:06:44.127 "vhost_create_blk_controller", 00:06:44.127 "vhost_scsi_controller_remove_target", 00:06:44.127 "vhost_scsi_controller_add_target", 00:06:44.127 "vhost_start_scsi_controller", 00:06:44.127 "vhost_create_scsi_controller", 00:06:44.127 "thread_set_cpumask", 00:06:44.127 "scheduler_set_options", 00:06:44.127 "framework_get_governor", 00:06:44.127 "framework_get_scheduler", 00:06:44.127 
"framework_set_scheduler", 00:06:44.127 "framework_get_reactors", 00:06:44.127 "thread_get_io_channels", 00:06:44.127 "thread_get_pollers", 00:06:44.127 "thread_get_stats", 00:06:44.127 "framework_monitor_context_switch", 00:06:44.127 "spdk_kill_instance", 00:06:44.127 "log_enable_timestamps", 00:06:44.127 "log_get_flags", 00:06:44.127 "log_clear_flag", 00:06:44.127 "log_set_flag", 00:06:44.127 "log_get_level", 00:06:44.127 "log_set_level", 00:06:44.127 "log_get_print_level", 00:06:44.127 "log_set_print_level", 00:06:44.127 "framework_enable_cpumask_locks", 00:06:44.127 "framework_disable_cpumask_locks", 00:06:44.127 "framework_wait_init", 00:06:44.127 "framework_start_init", 00:06:44.127 "scsi_get_devices", 00:06:44.127 "bdev_get_histogram", 00:06:44.127 "bdev_enable_histogram", 00:06:44.127 "bdev_set_qos_limit", 00:06:44.127 "bdev_set_qd_sampling_period", 00:06:44.127 "bdev_get_bdevs", 00:06:44.127 "bdev_reset_iostat", 00:06:44.127 "bdev_get_iostat", 00:06:44.127 "bdev_examine", 00:06:44.127 "bdev_wait_for_examine", 00:06:44.127 "bdev_set_options", 00:06:44.127 "accel_get_stats", 00:06:44.127 "accel_set_options", 00:06:44.127 "accel_set_driver", 00:06:44.127 "accel_crypto_key_destroy", 00:06:44.127 "accel_crypto_keys_get", 00:06:44.127 "accel_crypto_key_create", 00:06:44.127 "accel_assign_opc", 00:06:44.127 "accel_get_module_info", 00:06:44.127 "accel_get_opc_assignments", 00:06:44.127 "vmd_rescan", 00:06:44.127 "vmd_remove_device", 00:06:44.127 "vmd_enable", 00:06:44.127 "sock_get_default_impl", 00:06:44.127 "sock_set_default_impl", 00:06:44.127 "sock_impl_set_options", 00:06:44.127 "sock_impl_get_options", 00:06:44.127 "iobuf_get_stats", 00:06:44.127 "iobuf_set_options", 00:06:44.127 "keyring_get_keys", 00:06:44.127 "vfu_tgt_set_base_path", 00:06:44.127 "framework_get_pci_devices", 00:06:44.127 "framework_get_config", 00:06:44.127 "framework_get_subsystems", 00:06:44.127 "fsdev_set_opts", 00:06:44.127 "fsdev_get_opts", 00:06:44.127 "trace_get_info", 
00:06:44.127 "trace_get_tpoint_group_mask", 00:06:44.127 "trace_disable_tpoint_group", 00:06:44.127 "trace_enable_tpoint_group", 00:06:44.127 "trace_clear_tpoint_mask", 00:06:44.127 "trace_set_tpoint_mask", 00:06:44.127 "notify_get_notifications", 00:06:44.127 "notify_get_types", 00:06:44.127 "spdk_get_version", 00:06:44.127 "rpc_get_methods" 00:06:44.127 ] 00:06:44.127 22:29:47 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:44.127 22:29:47 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:44.127 22:29:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:44.386 22:29:47 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:44.386 22:29:47 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 104797 00:06:44.386 22:29:47 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 104797 ']' 00:06:44.386 22:29:47 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 104797 00:06:44.386 22:29:47 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:44.386 22:29:47 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.386 22:29:47 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 104797 00:06:44.386 22:29:47 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:44.386 22:29:47 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:44.386 22:29:47 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 104797' 00:06:44.386 killing process with pid 104797 00:06:44.386 22:29:47 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 104797 00:06:44.386 22:29:47 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 104797 00:06:44.646 00:06:44.646 real 0m1.266s 00:06:44.646 user 0m2.273s 00:06:44.646 sys 0m0.480s 00:06:44.646 22:29:47 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.646 22:29:47 spdkcli_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:06:44.646 ************************************ 00:06:44.646 END TEST spdkcli_tcp 00:06:44.646 ************************************ 00:06:44.646 22:29:47 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:44.646 22:29:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.646 22:29:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.646 22:29:47 -- common/autotest_common.sh@10 -- # set +x 00:06:44.646 ************************************ 00:06:44.646 START TEST dpdk_mem_utility 00:06:44.646 ************************************ 00:06:44.646 22:29:47 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:44.646 * Looking for test storage... 00:06:44.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:44.906 22:29:47 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:44.906 22:29:47 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:06:44.906 22:29:47 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:44.906 22:29:47 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:44.906 22:29:47 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.906 22:29:47 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.906 22:29:47 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.906 22:29:47 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.906 22:29:47 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.906 22:29:47 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.906 22:29:47 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.906 22:29:47 
dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.906 22:29:47 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.906 22:29:47 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.906 22:29:47 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.906 22:29:47 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:44.906 22:29:47 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:44.906 22:29:47 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.906 22:29:47 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:44.906 22:29:47 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:44.906 22:29:47 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:44.906 22:29:47 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.906 22:29:47 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:44.906 22:29:47 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.906 22:29:47 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:44.906 22:29:47 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:44.906 22:29:47 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.906 22:29:47 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:44.906 22:29:47 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.906 22:29:47 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.906 22:29:47 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.906 22:29:47 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:44.906 22:29:47 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.906 22:29:47 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:44.906 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.906 --rc genhtml_branch_coverage=1 00:06:44.906 --rc genhtml_function_coverage=1 00:06:44.906 --rc genhtml_legend=1 00:06:44.906 --rc geninfo_all_blocks=1 00:06:44.906 --rc geninfo_unexecuted_blocks=1 00:06:44.906 00:06:44.906 ' 00:06:44.906 22:29:47 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:44.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.906 --rc genhtml_branch_coverage=1 00:06:44.906 --rc genhtml_function_coverage=1 00:06:44.906 --rc genhtml_legend=1 00:06:44.906 --rc geninfo_all_blocks=1 00:06:44.906 --rc geninfo_unexecuted_blocks=1 00:06:44.906 00:06:44.906 ' 00:06:44.906 22:29:47 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:44.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.906 --rc genhtml_branch_coverage=1 00:06:44.906 --rc genhtml_function_coverage=1 00:06:44.906 --rc genhtml_legend=1 00:06:44.906 --rc geninfo_all_blocks=1 00:06:44.906 --rc geninfo_unexecuted_blocks=1 00:06:44.906 00:06:44.906 ' 00:06:44.906 22:29:47 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:44.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.906 --rc genhtml_branch_coverage=1 00:06:44.906 --rc genhtml_function_coverage=1 00:06:44.906 --rc genhtml_legend=1 00:06:44.906 --rc geninfo_all_blocks=1 00:06:44.906 --rc geninfo_unexecuted_blocks=1 00:06:44.906 00:06:44.906 ' 00:06:44.906 22:29:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:44.906 22:29:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=105011 00:06:44.906 22:29:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:44.906 22:29:47 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 105011 00:06:44.906 22:29:48 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 105011 ']' 00:06:44.906 22:29:48 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.906 22:29:48 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.906 22:29:48 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.906 22:29:48 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.906 22:29:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:44.906 [2024-10-11 22:29:48.053347] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:06:44.906 [2024-10-11 22:29:48.053436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105011 ] 00:06:44.906 [2024-10-11 22:29:48.114379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.906 [2024-10-11 22:29:48.161614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.165 22:29:48 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.165 22:29:48 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:45.165 22:29:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:45.165 22:29:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:45.165 22:29:48 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.165 
22:29:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:45.165 { 00:06:45.165 "filename": "/tmp/spdk_mem_dump.txt" 00:06:45.165 } 00:06:45.165 22:29:48 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.165 22:29:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:45.425 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:45.425 1 heaps totaling size 810.000000 MiB 00:06:45.425 size: 810.000000 MiB heap id: 0 00:06:45.425 end heaps---------- 00:06:45.425 9 mempools totaling size 595.772034 MiB 00:06:45.425 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:45.425 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:45.425 size: 92.545471 MiB name: bdev_io_105011 00:06:45.425 size: 50.003479 MiB name: msgpool_105011 00:06:45.425 size: 36.509338 MiB name: fsdev_io_105011 00:06:45.425 size: 21.763794 MiB name: PDU_Pool 00:06:45.425 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:45.425 size: 4.133484 MiB name: evtpool_105011 00:06:45.425 size: 0.026123 MiB name: Session_Pool 00:06:45.425 end mempools------- 00:06:45.425 6 memzones totaling size 4.142822 MiB 00:06:45.425 size: 1.000366 MiB name: RG_ring_0_105011 00:06:45.425 size: 1.000366 MiB name: RG_ring_1_105011 00:06:45.425 size: 1.000366 MiB name: RG_ring_4_105011 00:06:45.425 size: 1.000366 MiB name: RG_ring_5_105011 00:06:45.425 size: 0.125366 MiB name: RG_ring_2_105011 00:06:45.425 size: 0.015991 MiB name: RG_ring_3_105011 00:06:45.425 end memzones------- 00:06:45.425 22:29:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:45.425 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:45.425 list of free elements. 
size: 10.862488 MiB 00:06:45.425 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:45.425 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:45.425 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:45.425 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:45.425 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:45.425 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:45.425 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:45.425 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:45.425 element at address: 0x20001a600000 with size: 0.582886 MiB 00:06:45.425 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:45.425 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:45.425 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:45.425 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:45.425 element at address: 0x200027a00000 with size: 0.410034 MiB 00:06:45.425 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:45.425 list of standard malloc elements. 
size: 199.218628 MiB 00:06:45.425 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:45.425 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:45.425 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:45.425 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:45.425 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:45.425 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:45.425 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:45.425 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:45.425 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:45.425 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:45.425 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:45.425 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:45.425 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:45.425 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:45.425 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:45.425 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:45.425 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:45.425 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:45.425 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:45.425 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:45.425 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:45.425 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:45.425 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:45.425 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:45.425 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:45.425 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:45.425 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:45.425 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:45.425 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:45.425 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:45.425 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:45.425 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:45.425 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:45.425 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:45.425 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:45.425 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:45.425 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:45.425 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:45.425 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:45.425 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:06:45.425 element at address: 0x200027a69040 with size: 0.000183 MiB 00:06:45.425 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:06:45.425 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:45.425 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:45.425 list of memzone associated elements. 
size: 599.918884 MiB 00:06:45.425 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:45.425 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:45.425 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:45.425 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:45.425 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:45.425 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_105011_0 00:06:45.425 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:45.425 associated memzone info: size: 48.002930 MiB name: MP_msgpool_105011_0 00:06:45.425 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:45.425 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_105011_0 00:06:45.425 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:45.425 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:45.425 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:45.425 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:45.425 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:45.425 associated memzone info: size: 3.000122 MiB name: MP_evtpool_105011_0 00:06:45.425 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:45.426 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_105011 00:06:45.426 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:45.426 associated memzone info: size: 1.007996 MiB name: MP_evtpool_105011 00:06:45.426 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:45.426 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:45.426 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:45.426 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:45.426 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:45.426 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:45.426 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:45.426 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:45.426 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:45.426 associated memzone info: size: 1.000366 MiB name: RG_ring_0_105011 00:06:45.426 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:45.426 associated memzone info: size: 1.000366 MiB name: RG_ring_1_105011 00:06:45.426 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:45.426 associated memzone info: size: 1.000366 MiB name: RG_ring_4_105011 00:06:45.426 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:06:45.426 associated memzone info: size: 1.000366 MiB name: RG_ring_5_105011 00:06:45.426 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:45.426 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_105011 00:06:45.426 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:45.426 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_105011 00:06:45.426 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:45.426 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:45.426 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:45.426 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:45.426 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:45.426 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:45.426 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:45.426 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_105011 00:06:45.426 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:45.426 associated memzone info: size: 0.125366 MiB name: RG_ring_2_105011 00:06:45.426 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:45.426 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:45.426 element at address: 0x200027a69100 with size: 0.023743 MiB 00:06:45.426 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:45.426 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:45.426 associated memzone info: size: 0.015991 MiB name: RG_ring_3_105011 00:06:45.426 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:06:45.426 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:45.426 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:45.426 associated memzone info: size: 0.000183 MiB name: MP_msgpool_105011 00:06:45.426 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:45.426 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_105011 00:06:45.426 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:45.426 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_105011 00:06:45.426 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:06:45.426 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:45.426 22:29:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:45.426 22:29:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 105011 00:06:45.426 22:29:48 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 105011 ']' 00:06:45.426 22:29:48 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 105011 00:06:45.426 22:29:48 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:45.426 22:29:48 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:45.426 22:29:48 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 105011 00:06:45.426 22:29:48 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:45.426 22:29:48 dpdk_mem_utility -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:45.426 22:29:48 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 105011' 00:06:45.426 killing process with pid 105011 00:06:45.426 22:29:48 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 105011 00:06:45.426 22:29:48 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 105011 00:06:45.685 00:06:45.685 real 0m1.081s 00:06:45.685 user 0m1.070s 00:06:45.685 sys 0m0.415s 00:06:45.685 22:29:48 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.685 22:29:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:45.685 ************************************ 00:06:45.685 END TEST dpdk_mem_utility 00:06:45.685 ************************************ 00:06:45.944 22:29:48 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:45.944 22:29:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.944 22:29:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.944 22:29:48 -- common/autotest_common.sh@10 -- # set +x 00:06:45.944 ************************************ 00:06:45.944 START TEST event 00:06:45.944 ************************************ 00:06:45.944 22:29:48 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:45.944 * Looking for test storage... 
00:06:45.944 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:45.944 22:29:49 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:45.944 22:29:49 event -- common/autotest_common.sh@1691 -- # lcov --version 00:06:45.944 22:29:49 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:45.944 22:29:49 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:45.944 22:29:49 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.944 22:29:49 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.944 22:29:49 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.944 22:29:49 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.944 22:29:49 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.944 22:29:49 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.944 22:29:49 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.944 22:29:49 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.944 22:29:49 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.944 22:29:49 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.944 22:29:49 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.944 22:29:49 event -- scripts/common.sh@344 -- # case "$op" in 00:06:45.944 22:29:49 event -- scripts/common.sh@345 -- # : 1 00:06:45.944 22:29:49 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.944 22:29:49 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:45.944 22:29:49 event -- scripts/common.sh@365 -- # decimal 1 00:06:45.944 22:29:49 event -- scripts/common.sh@353 -- # local d=1 00:06:45.944 22:29:49 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.944 22:29:49 event -- scripts/common.sh@355 -- # echo 1 00:06:45.944 22:29:49 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.944 22:29:49 event -- scripts/common.sh@366 -- # decimal 2 00:06:45.944 22:29:49 event -- scripts/common.sh@353 -- # local d=2 00:06:45.944 22:29:49 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.944 22:29:49 event -- scripts/common.sh@355 -- # echo 2 00:06:45.944 22:29:49 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.944 22:29:49 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.944 22:29:49 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.944 22:29:49 event -- scripts/common.sh@368 -- # return 0 00:06:45.944 22:29:49 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.944 22:29:49 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:45.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.944 --rc genhtml_branch_coverage=1 00:06:45.944 --rc genhtml_function_coverage=1 00:06:45.944 --rc genhtml_legend=1 00:06:45.944 --rc geninfo_all_blocks=1 00:06:45.944 --rc geninfo_unexecuted_blocks=1 00:06:45.944 00:06:45.944 ' 00:06:45.944 22:29:49 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:45.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.944 --rc genhtml_branch_coverage=1 00:06:45.944 --rc genhtml_function_coverage=1 00:06:45.944 --rc genhtml_legend=1 00:06:45.944 --rc geninfo_all_blocks=1 00:06:45.944 --rc geninfo_unexecuted_blocks=1 00:06:45.944 00:06:45.944 ' 00:06:45.944 22:29:49 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:45.944 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:45.944 --rc genhtml_branch_coverage=1 00:06:45.944 --rc genhtml_function_coverage=1 00:06:45.944 --rc genhtml_legend=1 00:06:45.944 --rc geninfo_all_blocks=1 00:06:45.944 --rc geninfo_unexecuted_blocks=1 00:06:45.944 00:06:45.944 ' 00:06:45.944 22:29:49 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:45.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.944 --rc genhtml_branch_coverage=1 00:06:45.944 --rc genhtml_function_coverage=1 00:06:45.944 --rc genhtml_legend=1 00:06:45.944 --rc geninfo_all_blocks=1 00:06:45.944 --rc geninfo_unexecuted_blocks=1 00:06:45.944 00:06:45.944 ' 00:06:45.944 22:29:49 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:45.944 22:29:49 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:45.944 22:29:49 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:45.944 22:29:49 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:45.944 22:29:49 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.944 22:29:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.944 ************************************ 00:06:45.944 START TEST event_perf 00:06:45.944 ************************************ 00:06:45.944 22:29:49 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:45.944 Running I/O for 1 seconds...[2024-10-11 22:29:49.170073] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:06:45.944 [2024-10-11 22:29:49.170134] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105210 ] 00:06:46.203 [2024-10-11 22:29:49.233831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:46.203 [2024-10-11 22:29:49.282620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.203 [2024-10-11 22:29:49.282683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.203 [2024-10-11 22:29:49.282747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.203 [2024-10-11 22:29:49.282750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.138 Running I/O for 1 seconds... 00:06:47.138 lcore 0: 232928 00:06:47.138 lcore 1: 232926 00:06:47.138 lcore 2: 232927 00:06:47.138 lcore 3: 232927 00:06:47.138 done. 
00:06:47.138 00:06:47.138 real 0m1.170s 00:06:47.138 user 0m4.091s 00:06:47.138 sys 0m0.071s 00:06:47.138 22:29:50 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.138 22:29:50 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:47.138 ************************************ 00:06:47.138 END TEST event_perf 00:06:47.138 ************************************ 00:06:47.138 22:29:50 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:47.138 22:29:50 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:47.138 22:29:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.138 22:29:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:47.138 ************************************ 00:06:47.138 START TEST event_reactor 00:06:47.138 ************************************ 00:06:47.138 22:29:50 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:47.138 [2024-10-11 22:29:50.387928] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:06:47.138 [2024-10-11 22:29:50.387994] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105366 ] 00:06:47.397 [2024-10-11 22:29:50.447448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.397 [2024-10-11 22:29:50.493166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.332 test_start 00:06:48.332 oneshot 00:06:48.332 tick 100 00:06:48.332 tick 100 00:06:48.332 tick 250 00:06:48.332 tick 100 00:06:48.332 tick 100 00:06:48.332 tick 100 00:06:48.332 tick 250 00:06:48.332 tick 500 00:06:48.332 tick 100 00:06:48.332 tick 100 00:06:48.332 tick 250 00:06:48.332 tick 100 00:06:48.332 tick 100 00:06:48.332 test_end 00:06:48.332 00:06:48.332 real 0m1.161s 00:06:48.332 user 0m1.088s 00:06:48.332 sys 0m0.068s 00:06:48.332 22:29:51 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.332 22:29:51 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:48.332 ************************************ 00:06:48.332 END TEST event_reactor 00:06:48.332 ************************************ 00:06:48.332 22:29:51 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:48.332 22:29:51 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:48.332 22:29:51 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.332 22:29:51 event -- common/autotest_common.sh@10 -- # set +x 00:06:48.332 ************************************ 00:06:48.332 START TEST event_reactor_perf 00:06:48.332 ************************************ 00:06:48.332 22:29:51 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:06:48.332 [2024-10-11 22:29:51.600502] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:06:48.332 [2024-10-11 22:29:51.600592] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105527 ] 00:06:48.591 [2024-10-11 22:29:51.657366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.591 [2024-10-11 22:29:51.702160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.526 test_start 00:06:49.526 test_end 00:06:49.526 Performance: 445575 events per second 00:06:49.526 00:06:49.526 real 0m1.159s 00:06:49.526 user 0m1.093s 00:06:49.526 sys 0m0.062s 00:06:49.526 22:29:52 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.526 22:29:52 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:49.526 ************************************ 00:06:49.526 END TEST event_reactor_perf 00:06:49.526 ************************************ 00:06:49.526 22:29:52 event -- event/event.sh@49 -- # uname -s 00:06:49.526 22:29:52 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:49.526 22:29:52 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:49.526 22:29:52 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.526 22:29:52 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.526 22:29:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:49.785 ************************************ 00:06:49.785 START TEST event_scheduler 00:06:49.785 ************************************ 00:06:49.785 22:29:52 event.event_scheduler -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:49.785 * Looking for test storage... 00:06:49.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:49.785 22:29:52 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:49.785 22:29:52 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:06:49.785 22:29:52 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:49.785 22:29:52 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:49.785 22:29:52 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.785 22:29:52 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.785 22:29:52 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.785 22:29:52 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.785 22:29:52 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.785 22:29:52 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.785 22:29:52 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.785 22:29:52 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.785 22:29:52 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.785 22:29:52 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.785 22:29:52 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.785 22:29:52 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:49.785 22:29:52 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:49.785 22:29:52 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.785 22:29:52 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:49.785 22:29:52 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:49.785 22:29:52 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:49.785 22:29:52 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.785 22:29:52 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:49.785 22:29:52 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.785 22:29:52 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:49.785 22:29:52 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:49.785 22:29:52 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.785 22:29:52 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:49.785 22:29:52 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.785 22:29:52 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.785 22:29:52 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.785 22:29:52 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:49.785 22:29:52 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.785 22:29:52 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:49.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.785 --rc genhtml_branch_coverage=1 00:06:49.785 --rc genhtml_function_coverage=1 00:06:49.785 --rc genhtml_legend=1 00:06:49.785 --rc geninfo_all_blocks=1 00:06:49.785 --rc geninfo_unexecuted_blocks=1 00:06:49.785 00:06:49.785 ' 00:06:49.785 22:29:52 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:49.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.785 --rc genhtml_branch_coverage=1 00:06:49.785 --rc genhtml_function_coverage=1 00:06:49.785 --rc 
genhtml_legend=1 00:06:49.785 --rc geninfo_all_blocks=1 00:06:49.785 --rc geninfo_unexecuted_blocks=1 00:06:49.785 00:06:49.785 ' 00:06:49.785 22:29:52 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:49.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.785 --rc genhtml_branch_coverage=1 00:06:49.785 --rc genhtml_function_coverage=1 00:06:49.785 --rc genhtml_legend=1 00:06:49.785 --rc geninfo_all_blocks=1 00:06:49.785 --rc geninfo_unexecuted_blocks=1 00:06:49.785 00:06:49.785 ' 00:06:49.785 22:29:52 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:49.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.785 --rc genhtml_branch_coverage=1 00:06:49.785 --rc genhtml_function_coverage=1 00:06:49.785 --rc genhtml_legend=1 00:06:49.785 --rc geninfo_all_blocks=1 00:06:49.785 --rc geninfo_unexecuted_blocks=1 00:06:49.785 00:06:49.785 ' 00:06:49.785 22:29:52 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:49.785 22:29:52 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=105714 00:06:49.785 22:29:52 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:49.785 22:29:52 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:49.786 22:29:52 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 105714 00:06:49.786 22:29:52 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 105714 ']' 00:06:49.786 22:29:52 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.786 22:29:52 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.786 22:29:52 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.786 22:29:52 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.786 22:29:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:49.786 [2024-10-11 22:29:52.989427] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:06:49.786 [2024-10-11 22:29:52.989522] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105714 ] 00:06:49.786 [2024-10-11 22:29:53.048242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:50.045 [2024-10-11 22:29:53.098340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.045 [2024-10-11 22:29:53.098446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.045 [2024-10-11 22:29:53.098541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.045 [2024-10-11 22:29:53.098545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.045 22:29:53 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.045 22:29:53 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:50.045 22:29:53 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:50.045 22:29:53 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.045 22:29:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:50.045 [2024-10-11 22:29:53.219530] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:50.045 [2024-10-11 22:29:53.219580] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:50.045 [2024-10-11 22:29:53.219598] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:50.045 [2024-10-11 22:29:53.219610] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:50.045 [2024-10-11 22:29:53.219620] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:50.045 22:29:53 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.045 22:29:53 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:50.045 22:29:53 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.045 22:29:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:50.304 [2024-10-11 22:29:53.318748] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:50.304 22:29:53 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.304 22:29:53 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:50.304 22:29:53 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:50.304 22:29:53 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.304 22:29:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:50.304 ************************************ 00:06:50.304 START TEST scheduler_create_thread 00:06:50.304 ************************************ 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.304 2 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.304 3 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.304 4 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.304 5 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.304 22:29:53 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.304 6 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.304 7 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.304 8 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.304 22:29:53 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.304 9 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.304 10 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.304 22:29:53 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.304 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.871 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.871 00:06:50.871 real 0m0.591s 00:06:50.871 user 0m0.008s 00:06:50.871 sys 0m0.005s 00:06:50.871 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.871 22:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.871 ************************************ 00:06:50.871 END TEST scheduler_create_thread 00:06:50.871 ************************************ 00:06:50.871 22:29:53 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:50.871 22:29:53 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 105714 00:06:50.871 22:29:53 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 105714 ']' 00:06:50.871 22:29:53 event.event_scheduler -- common/autotest_common.sh@954 -- # kill 
-0 105714 00:06:50.871 22:29:53 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:50.871 22:29:53 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:50.871 22:29:53 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 105714 00:06:50.871 22:29:53 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:50.871 22:29:53 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:50.871 22:29:53 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 105714' 00:06:50.871 killing process with pid 105714 00:06:50.871 22:29:53 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 105714 00:06:50.871 22:29:53 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 105714 00:06:51.438 [2024-10-11 22:29:54.414778] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:51.438 00:06:51.438 real 0m1.795s 00:06:51.438 user 0m2.473s 00:06:51.438 sys 0m0.347s 00:06:51.438 22:29:54 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.438 22:29:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:51.438 ************************************ 00:06:51.438 END TEST event_scheduler 00:06:51.438 ************************************ 00:06:51.438 22:29:54 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:51.438 22:29:54 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:51.438 22:29:54 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.438 22:29:54 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.438 22:29:54 event -- common/autotest_common.sh@10 -- # set +x 00:06:51.438 ************************************ 00:06:51.438 START TEST app_repeat 00:06:51.438 ************************************ 00:06:51.438 22:29:54 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:51.438 22:29:54 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.438 22:29:54 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.438 22:29:54 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:51.438 22:29:54 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.438 22:29:54 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:51.438 22:29:54 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:51.438 22:29:54 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:51.438 22:29:54 event.app_repeat -- event/event.sh@19 -- # repeat_pid=106024 00:06:51.438 22:29:54 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:51.438 22:29:54 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:51.438 22:29:54 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 106024' 00:06:51.438 Process app_repeat pid: 106024 00:06:51.438 22:29:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:51.438 22:29:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:51.438 spdk_app_start Round 0 00:06:51.438 22:29:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 106024 /var/tmp/spdk-nbd.sock 00:06:51.438 22:29:54 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 106024 ']' 00:06:51.438 22:29:54 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:51.438 22:29:54 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.438 22:29:54 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:51.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:51.438 22:29:54 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.438 22:29:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:51.438 [2024-10-11 22:29:54.668068] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:06:51.438 [2024-10-11 22:29:54.668127] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106024 ] 00:06:51.697 [2024-10-11 22:29:54.726232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:51.697 [2024-10-11 22:29:54.778572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.697 [2024-10-11 22:29:54.778582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.697 22:29:54 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.697 22:29:54 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:51.697 22:29:54 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.956 Malloc0 00:06:51.956 22:29:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:52.523 Malloc1 00:06:52.523 22:29:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:52.523 22:29:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.523 22:29:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:52.523 22:29:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:52.523 22:29:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.523 22:29:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:52.523 22:29:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:52.523 
22:29:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.523 22:29:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:52.523 22:29:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:52.523 22:29:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.523 22:29:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:52.523 22:29:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:52.523 22:29:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:52.523 22:29:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.523 22:29:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:52.781 /dev/nbd0 00:06:52.781 22:29:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:52.781 22:29:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:52.781 22:29:55 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:52.781 22:29:55 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:52.781 22:29:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:52.781 22:29:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:52.781 22:29:55 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:52.781 22:29:55 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:52.781 22:29:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:52.781 22:29:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:52.781 22:29:55 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:52.781 1+0 records in 00:06:52.781 1+0 records out 00:06:52.781 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230423 s, 17.8 MB/s 00:06:52.781 22:29:55 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:52.781 22:29:55 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:52.781 22:29:55 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:52.781 22:29:55 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:52.781 22:29:55 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:52.781 22:29:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.781 22:29:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.781 22:29:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:53.039 /dev/nbd1 00:06:53.039 22:29:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:53.039 22:29:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:53.039 22:29:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:53.039 22:29:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:53.039 22:29:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:53.039 22:29:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:53.039 22:29:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:53.039 22:29:56 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:53.039 22:29:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:53.039 22:29:56 event.app_repeat -- 
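The `waitfornbd` helper exercised above (from `common/autotest_common.sh`) follows a simple pattern: poll `/proc/partitions` with `grep -q -w` until the device name appears, then issue one direct-I/O `dd` read to confirm the device actually answers. A minimal sketch of that polling step, with the partitions listing made a parameter (a plain file can stand in for `/proc/partitions`, so the sketch runs without a real `/dev/nbdX` device — that substitution is an assumption for illustration, not part of the original helper):

```shell
#!/usr/bin/env sh
# Hedged sketch of the waitfornbd polling pattern seen in the log:
# retry up to 20 times until the device name shows up in a partitions
# listing. The second argument is a stand-in for /proc/partitions so
# the sketch is runnable without an NBD device attached.
wait_for_nbd() {
    nbd_name=$1
    partitions_file=${2:-/proc/partitions}
    i=1
    while [ "$i" -le 20 ]; do
        if grep -q -w "$nbd_name" "$partitions_file"; then
            return 0    # device is visible; caller may now read from it
        fi
        i=$((i + 1))
        sleep 0.1
    done
    return 1            # device never appeared within the retry budget
}
```

The real helper then runs `dd if=/dev/<name> ... count=1 iflag=direct` and checks via `stat -c %s` that a non-zero block was copied, exactly as the `1+0 records in / 1+0 records out` lines above show.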
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:53.039 22:29:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:53.039 1+0 records in 00:06:53.039 1+0 records out 00:06:53.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210198 s, 19.5 MB/s 00:06:53.039 22:29:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:53.039 22:29:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:53.039 22:29:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:53.039 22:29:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:53.039 22:29:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:53.039 22:29:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:53.039 22:29:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:53.039 22:29:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.039 22:29:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.039 22:29:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:53.298 { 00:06:53.298 "nbd_device": "/dev/nbd0", 00:06:53.298 "bdev_name": "Malloc0" 00:06:53.298 }, 00:06:53.298 { 00:06:53.298 "nbd_device": "/dev/nbd1", 00:06:53.298 "bdev_name": "Malloc1" 00:06:53.298 } 00:06:53.298 ]' 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:53.298 { 00:06:53.298 "nbd_device": "/dev/nbd0", 00:06:53.298 "bdev_name": "Malloc0" 00:06:53.298 
}, 00:06:53.298 { 00:06:53.298 "nbd_device": "/dev/nbd1", 00:06:53.298 "bdev_name": "Malloc1" 00:06:53.298 } 00:06:53.298 ]' 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:53.298 /dev/nbd1' 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:53.298 /dev/nbd1' 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:53.298 256+0 records in 00:06:53.298 256+0 records out 00:06:53.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00393576 s, 266 MB/s 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:53.298 256+0 records in 00:06:53.298 256+0 records out 00:06:53.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203553 s, 51.5 MB/s 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:53.298 256+0 records in 00:06:53.298 256+0 records out 00:06:53.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220209 s, 47.6 MB/s 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:53.298 22:29:56 event.app_repeat -- 
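The `nbd_dd_data_verify` calls above do a write/verify round trip: `dd` 1 MiB of `/dev/urandom` into a temp file, copy it onto each NBD device with `oflag=direct`, then `cmp -b -n 1M` each device back against the temp file. A runnable sketch of that round trip, with regular temp files standing in for `/dev/nbd0` and `/dev/nbd1` and `oflag=direct` dropped (direct I/O requires a real block device — both substitutions are assumptions for illustration):

```shell
#!/usr/bin/env sh
# Hedged sketch of the nbd_dd_data_verify write/verify round trip from
# the log. Temp files stand in for the two NBD devices.
set -e
tmp_file=$(mktemp)
dev0=$(mktemp)   # stand-in for /dev/nbd0
dev1=$(mktemp)   # stand-in for /dev/nbd1

# write phase: generate 1 MiB of random data, copy it to each "device"
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
for dev in "$dev0" "$dev1"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
done

# verify phase: byte-compare the first 1 MiB of each "device";
# cmp exits nonzero on any mismatch, which aborts under set -e
for dev in "$dev0" "$dev1"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done

rm -f "$tmp_file" "$dev0" "$dev1"
verify_status=ok
```

With `set -e`, any failed `dd` or mismatched `cmp` aborts the script, which is how the harness turns a data-corruption bug into a test failure.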
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.298 22:29:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:53.865 22:29:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:53.865 22:29:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:53.865 22:29:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:53.865 22:29:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.865 22:29:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.865 22:29:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:53.865 22:29:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.865 22:29:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.865 22:29:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.865 22:29:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:54.123 22:29:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:54.123 22:29:57 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:54.123 22:29:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:54.123 22:29:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.123 22:29:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.123 22:29:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:54.123 22:29:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:54.123 22:29:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.123 22:29:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:54.123 22:29:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.123 22:29:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:54.382 22:29:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:54.382 22:29:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:54.382 22:29:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:54.382 22:29:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:54.382 22:29:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:54.382 22:29:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:54.382 22:29:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:54.382 22:29:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:54.382 22:29:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:54.382 22:29:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:54.382 22:29:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:54.382 22:29:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:54.382 22:29:57 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:54.640 22:29:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:54.899 [2024-10-11 22:29:57.937381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:54.899 [2024-10-11 22:29:57.981007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.899 [2024-10-11 22:29:57.981007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.899 [2024-10-11 22:29:58.038506] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:54.899 [2024-10-11 22:29:58.038597] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:58.195 22:30:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:58.195 22:30:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:58.195 spdk_app_start Round 1 00:06:58.195 22:30:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 106024 /var/tmp/spdk-nbd.sock 00:06:58.195 22:30:00 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 106024 ']' 00:06:58.195 22:30:00 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:58.195 22:30:00 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.195 22:30:00 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:58.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:58.195 22:30:00 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.195 22:30:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:58.195 22:30:01 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.195 22:30:01 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:58.195 22:30:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.195 Malloc0 00:06:58.195 22:30:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.454 Malloc1 00:06:58.454 22:30:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.454 22:30:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.454 22:30:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.454 22:30:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:58.454 22:30:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.454 22:30:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:58.454 22:30:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.454 22:30:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.454 22:30:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.454 22:30:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:58.454 22:30:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.454 22:30:01 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:58.454 22:30:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:58.454 22:30:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:58.454 22:30:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.454 22:30:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:58.712 /dev/nbd0 00:06:58.712 22:30:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:58.712 22:30:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:58.712 22:30:01 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:58.712 22:30:01 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:58.712 22:30:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:58.712 22:30:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:58.712 22:30:01 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:58.712 22:30:01 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:58.712 22:30:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:58.712 22:30:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:58.712 22:30:01 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:58.970 1+0 records in 00:06:58.970 1+0 records out 00:06:58.970 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195672 s, 20.9 MB/s 00:06:58.971 22:30:01 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:58.971 22:30:01 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:58.971 22:30:01 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:58.971 22:30:01 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:58.971 22:30:01 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:58.971 22:30:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.971 22:30:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.971 22:30:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:59.229 /dev/nbd1 00:06:59.229 22:30:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:59.229 22:30:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:59.229 22:30:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:59.229 22:30:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:59.229 22:30:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:59.229 22:30:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:59.229 22:30:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:59.229 22:30:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:59.229 22:30:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:59.229 22:30:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:59.229 22:30:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:59.229 1+0 records in 00:06:59.229 1+0 records out 00:06:59.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021511 s, 19.0 MB/s 00:06:59.229 22:30:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:59.229 22:30:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:59.229 22:30:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:59.229 22:30:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:59.229 22:30:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:59.229 22:30:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.229 22:30:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.229 22:30:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:59.229 22:30:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.229 22:30:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:59.488 { 00:06:59.488 "nbd_device": "/dev/nbd0", 00:06:59.488 "bdev_name": "Malloc0" 00:06:59.488 }, 00:06:59.488 { 00:06:59.488 "nbd_device": "/dev/nbd1", 00:06:59.488 "bdev_name": "Malloc1" 00:06:59.488 } 00:06:59.488 ]' 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:59.488 { 00:06:59.488 "nbd_device": "/dev/nbd0", 00:06:59.488 "bdev_name": "Malloc0" 00:06:59.488 }, 00:06:59.488 { 00:06:59.488 "nbd_device": "/dev/nbd1", 00:06:59.488 "bdev_name": "Malloc1" 00:06:59.488 } 00:06:59.488 ]' 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:59.488 /dev/nbd1' 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:59.488 /dev/nbd1' 00:06:59.488 
22:30:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:59.488 256+0 records in 00:06:59.488 256+0 records out 00:06:59.488 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00509579 s, 206 MB/s 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:59.488 256+0 records in 00:06:59.488 256+0 records out 00:06:59.488 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201445 s, 52.1 MB/s 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:59.488 256+0 records in 00:06:59.488 256+0 records out 00:06:59.488 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229007 s, 45.8 MB/s 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.488 22:30:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:59.746 22:30:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:59.746 22:30:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:59.746 22:30:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:59.746 22:30:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.746 22:30:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.746 22:30:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:59.746 22:30:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:59.746 22:30:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.746 22:30:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.746 22:30:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:00.314 22:30:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:00.314 22:30:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:00.314 22:30:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:00.314 22:30:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.314 22:30:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.314 22:30:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:00.314 22:30:03 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:00.314 22:30:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.314 22:30:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.314 22:30:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.314 22:30:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.572 22:30:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:00.572 22:30:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:00.572 22:30:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.572 22:30:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:00.572 22:30:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:00.572 22:30:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.572 22:30:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:00.572 22:30:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:00.572 22:30:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:00.572 22:30:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:00.572 22:30:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:00.572 22:30:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:00.572 22:30:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:00.831 22:30:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:01.089 [2024-10-11 22:30:04.134967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:01.089 [2024-10-11 22:30:04.177308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.089 [2024-10-11 22:30:04.177309] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.089 [2024-10-11 22:30:04.235270] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:01.089 [2024-10-11 22:30:04.235333] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:04.372 22:30:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:04.372 22:30:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:04.372 spdk_app_start Round 2 00:07:04.372 22:30:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 106024 /var/tmp/spdk-nbd.sock 00:07:04.372 22:30:06 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 106024 ']' 00:07:04.372 22:30:06 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:04.372 22:30:06 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.372 22:30:06 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:04.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:04.372 22:30:06 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.372 22:30:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:04.372 22:30:07 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.372 22:30:07 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:04.372 22:30:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.372 Malloc0 00:07:04.372 22:30:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.631 Malloc1 00:07:04.631 22:30:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.631 22:30:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.631 22:30:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.631 22:30:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:04.631 22:30:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.631 22:30:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:04.631 22:30:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.631 22:30:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.631 22:30:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.631 22:30:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:04.631 22:30:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.631 22:30:07 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:04.631 22:30:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:04.631 22:30:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:04.631 22:30:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.631 22:30:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:04.889 /dev/nbd0 00:07:05.148 22:30:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:05.148 22:30:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:05.148 22:30:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:05.148 22:30:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:05.148 22:30:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:05.148 22:30:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:05.148 22:30:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:05.148 22:30:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:05.148 22:30:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:05.148 22:30:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:05.148 22:30:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:05.148 1+0 records in 00:07:05.148 1+0 records out 00:07:05.148 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218468 s, 18.7 MB/s 00:07:05.148 22:30:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:05.148 22:30:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:05.148 22:30:08 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:05.148 22:30:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:05.148 22:30:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:05.148 22:30:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.148 22:30:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.148 22:30:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:05.406 /dev/nbd1 00:07:05.406 22:30:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:05.406 22:30:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:05.406 22:30:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:05.406 22:30:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:05.406 22:30:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:05.406 22:30:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:05.406 22:30:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:05.406 22:30:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:05.406 22:30:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:05.406 22:30:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:05.406 22:30:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:05.406 1+0 records in 00:07:05.406 1+0 records out 00:07:05.406 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000163266 s, 25.1 MB/s 00:07:05.406 22:30:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:05.406 22:30:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:05.406 22:30:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:05.406 22:30:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:05.406 22:30:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:05.406 22:30:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.406 22:30:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.406 22:30:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.406 22:30:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.406 22:30:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:05.665 { 00:07:05.665 "nbd_device": "/dev/nbd0", 00:07:05.665 "bdev_name": "Malloc0" 00:07:05.665 }, 00:07:05.665 { 00:07:05.665 "nbd_device": "/dev/nbd1", 00:07:05.665 "bdev_name": "Malloc1" 00:07:05.665 } 00:07:05.665 ]' 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:05.665 { 00:07:05.665 "nbd_device": "/dev/nbd0", 00:07:05.665 "bdev_name": "Malloc0" 00:07:05.665 }, 00:07:05.665 { 00:07:05.665 "nbd_device": "/dev/nbd1", 00:07:05.665 "bdev_name": "Malloc1" 00:07:05.665 } 00:07:05.665 ]' 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:05.665 /dev/nbd1' 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:05.665 /dev/nbd1' 00:07:05.665 
22:30:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:05.665 256+0 records in 00:07:05.665 256+0 records out 00:07:05.665 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00530479 s, 198 MB/s 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:05.665 256+0 records in 00:07:05.665 256+0 records out 00:07:05.665 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201768 s, 52.0 MB/s 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:05.665 256+0 records in 00:07:05.665 256+0 records out 00:07:05.665 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022788 s, 46.0 MB/s 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.665 22:30:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:06.239 22:30:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.239 22:30:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.239 22:30:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.239 22:30:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.239 22:30:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.239 22:30:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.239 22:30:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:06.239 22:30:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.239 22:30:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.239 22:30:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:06.239 22:30:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:06.239 22:30:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:06.239 22:30:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:06.239 22:30:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.239 22:30:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.239 22:30:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:06.497 22:30:09 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:06.497 22:30:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.497 22:30:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:06.497 22:30:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.497 22:30:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.755 22:30:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:06.755 22:30:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:06.755 22:30:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:06.755 22:30:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:06.755 22:30:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:06.755 22:30:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.755 22:30:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:06.755 22:30:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:06.755 22:30:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:06.755 22:30:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:06.755 22:30:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:06.755 22:30:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:06.756 22:30:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:07.014 22:30:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:07.273 [2024-10-11 22:30:10.293588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:07.273 [2024-10-11 22:30:10.338557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.273 [2024-10-11 22:30:10.338560] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.273 [2024-10-11 22:30:10.398832] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:07.273 [2024-10-11 22:30:10.398928] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:10.556 22:30:13 event.app_repeat -- event/event.sh@38 -- # waitforlisten 106024 /var/tmp/spdk-nbd.sock 00:07:10.556 22:30:13 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 106024 ']' 00:07:10.556 22:30:13 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:10.556 22:30:13 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.556 22:30:13 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:10.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:10.556 22:30:13 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.556 22:30:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:10.556 22:30:13 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.556 22:30:13 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:10.556 22:30:13 event.app_repeat -- event/event.sh@39 -- # killprocess 106024 00:07:10.556 22:30:13 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 106024 ']' 00:07:10.556 22:30:13 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 106024 00:07:10.556 22:30:13 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:10.556 22:30:13 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.556 22:30:13 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 106024 00:07:10.556 22:30:13 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.556 22:30:13 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:10.556 22:30:13 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 106024' 00:07:10.556 killing process with pid 106024 00:07:10.556 22:30:13 event.app_repeat -- common/autotest_common.sh@969 -- # kill 106024 00:07:10.556 22:30:13 event.app_repeat -- common/autotest_common.sh@974 -- # wait 106024 00:07:10.556 spdk_app_start is called in Round 0. 00:07:10.556 Shutdown signal received, stop current app iteration 00:07:10.556 Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 reinitialization... 00:07:10.556 spdk_app_start is called in Round 1. 00:07:10.556 Shutdown signal received, stop current app iteration 00:07:10.556 Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 reinitialization... 00:07:10.556 spdk_app_start is called in Round 2. 
00:07:10.556 Shutdown signal received, stop current app iteration 00:07:10.556 Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 reinitialization... 00:07:10.556 spdk_app_start is called in Round 3. 00:07:10.556 Shutdown signal received, stop current app iteration 00:07:10.556 22:30:13 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:10.556 22:30:13 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:10.556 00:07:10.556 real 0m18.933s 00:07:10.556 user 0m41.931s 00:07:10.556 sys 0m3.372s 00:07:10.556 22:30:13 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.556 22:30:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:10.556 ************************************ 00:07:10.556 END TEST app_repeat 00:07:10.556 ************************************ 00:07:10.557 22:30:13 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:10.557 22:30:13 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:10.557 22:30:13 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.557 22:30:13 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.557 22:30:13 event -- common/autotest_common.sh@10 -- # set +x 00:07:10.557 ************************************ 00:07:10.557 START TEST cpu_locks 00:07:10.557 ************************************ 00:07:10.557 22:30:13 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:10.557 * Looking for test storage... 
00:07:10.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:10.557 22:30:13 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:10.557 22:30:13 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:07:10.557 22:30:13 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:10.557 22:30:13 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:10.557 22:30:13 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.557 22:30:13 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.557 22:30:13 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.557 22:30:13 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.557 22:30:13 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.557 22:30:13 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.557 22:30:13 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.557 22:30:13 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.557 22:30:13 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.557 22:30:13 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.557 22:30:13 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.557 22:30:13 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:10.557 22:30:13 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:10.557 22:30:13 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.557 22:30:13 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:10.557 22:30:13 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:10.557 22:30:13 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:10.557 22:30:13 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.557 22:30:13 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:10.557 22:30:13 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.557 22:30:13 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:10.557 22:30:13 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:10.557 22:30:13 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.557 22:30:13 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:10.557 22:30:13 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.557 22:30:13 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.557 22:30:13 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.557 22:30:13 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:10.557 22:30:13 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.557 22:30:13 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:10.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.557 --rc genhtml_branch_coverage=1 00:07:10.557 --rc genhtml_function_coverage=1 00:07:10.557 --rc genhtml_legend=1 00:07:10.557 --rc geninfo_all_blocks=1 00:07:10.557 --rc geninfo_unexecuted_blocks=1 00:07:10.557 00:07:10.557 ' 00:07:10.557 22:30:13 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:10.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.557 --rc genhtml_branch_coverage=1 00:07:10.557 --rc genhtml_function_coverage=1 00:07:10.557 --rc genhtml_legend=1 00:07:10.557 --rc geninfo_all_blocks=1 00:07:10.557 --rc geninfo_unexecuted_blocks=1 
00:07:10.557 00:07:10.557 ' 00:07:10.557 22:30:13 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:10.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.557 --rc genhtml_branch_coverage=1 00:07:10.557 --rc genhtml_function_coverage=1 00:07:10.557 --rc genhtml_legend=1 00:07:10.557 --rc geninfo_all_blocks=1 00:07:10.557 --rc geninfo_unexecuted_blocks=1 00:07:10.557 00:07:10.557 ' 00:07:10.557 22:30:13 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:10.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.557 --rc genhtml_branch_coverage=1 00:07:10.557 --rc genhtml_function_coverage=1 00:07:10.557 --rc genhtml_legend=1 00:07:10.557 --rc geninfo_all_blocks=1 00:07:10.557 --rc geninfo_unexecuted_blocks=1 00:07:10.557 00:07:10.557 ' 00:07:10.557 22:30:13 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:10.557 22:30:13 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:10.557 22:30:13 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:10.557 22:30:13 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:10.557 22:30:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.557 22:30:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.557 22:30:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.557 ************************************ 00:07:10.557 START TEST default_locks 00:07:10.557 ************************************ 00:07:10.557 22:30:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:10.557 22:30:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=109153 00:07:10.557 22:30:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:07:10.557 22:30:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 109153 00:07:10.557 22:30:13 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 109153 ']' 00:07:10.557 22:30:13 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.557 22:30:13 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.557 22:30:13 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.557 22:30:13 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.557 22:30:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.817 [2024-10-11 22:30:13.857268] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:07:10.817 [2024-10-11 22:30:13.857347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109153 ] 00:07:10.817 [2024-10-11 22:30:13.917061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.817 [2024-10-11 22:30:13.960995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.076 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:11.076 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:11.076 22:30:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 109153 00:07:11.076 22:30:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 109153 00:07:11.076 22:30:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:11.334 lslocks: write error 00:07:11.334 22:30:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 109153 00:07:11.334 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 109153 ']' 00:07:11.334 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 109153 00:07:11.334 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:11.334 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:11.334 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 109153 00:07:11.334 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:11.334 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:11.334 22:30:14 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 109153' 00:07:11.334 killing process with pid 109153 00:07:11.334 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 109153 00:07:11.334 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 109153 00:07:11.594 22:30:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 109153 00:07:11.594 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:11.594 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 109153 00:07:11.594 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:11.594 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.594 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:11.594 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.594 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 109153 00:07:11.594 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 109153 ']' 00:07:11.594 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.594 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.594 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:11.594 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.594 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.594 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (109153) - No such process 00:07:11.594 ERROR: process (pid: 109153) is no longer running 00:07:11.594 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:11.594 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:11.594 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:11.594 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:11.594 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:11.594 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:11.594 22:30:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:11.594 22:30:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:11.594 22:30:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:11.594 22:30:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:11.594 00:07:11.594 real 0m1.053s 00:07:11.594 user 0m1.015s 00:07:11.594 sys 0m0.481s 00:07:11.594 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.594 22:30:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.594 ************************************ 00:07:11.594 END TEST default_locks 00:07:11.594 ************************************ 00:07:11.853 22:30:14 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:11.853 22:30:14 event.cpu_locks -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:11.853 22:30:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.853 22:30:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.853 ************************************ 00:07:11.853 START TEST default_locks_via_rpc 00:07:11.853 ************************************ 00:07:11.853 22:30:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:11.853 22:30:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=109318 00:07:11.853 22:30:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:11.853 22:30:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 109318 00:07:11.853 22:30:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 109318 ']' 00:07:11.853 22:30:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.854 22:30:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.854 22:30:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.854 22:30:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.854 22:30:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.854 [2024-10-11 22:30:14.963010] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:07:11.854 [2024-10-11 22:30:14.963106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109318 ] 00:07:11.854 [2024-10-11 22:30:15.019971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.854 [2024-10-11 22:30:15.062037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.113 22:30:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.113 22:30:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:12.113 22:30:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:12.113 22:30:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.113 22:30:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.113 22:30:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.113 22:30:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:12.113 22:30:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:12.113 22:30:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:12.113 22:30:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:12.113 22:30:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:12.113 22:30:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.113 22:30:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.113 22:30:15 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.113 22:30:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 109318 00:07:12.113 22:30:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 109318 00:07:12.113 22:30:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:12.371 22:30:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 109318 00:07:12.371 22:30:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 109318 ']' 00:07:12.371 22:30:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 109318 00:07:12.371 22:30:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:12.371 22:30:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:12.371 22:30:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 109318 00:07:12.371 22:30:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:12.371 22:30:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:12.371 22:30:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 109318' 00:07:12.371 killing process with pid 109318 00:07:12.371 22:30:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 109318 00:07:12.371 22:30:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 109318 00:07:12.937 00:07:12.937 real 0m1.067s 00:07:12.937 user 0m1.049s 00:07:12.937 sys 0m0.474s 00:07:12.937 22:30:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.937 22:30:15 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.937 ************************************ 00:07:12.937 END TEST default_locks_via_rpc 00:07:12.937 ************************************ 00:07:12.937 22:30:15 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:12.937 22:30:15 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:12.937 22:30:15 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.937 22:30:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:12.937 ************************************ 00:07:12.937 START TEST non_locking_app_on_locked_coremask 00:07:12.937 ************************************ 00:07:12.937 22:30:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:12.937 22:30:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=109478 00:07:12.937 22:30:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:12.937 22:30:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 109478 /var/tmp/spdk.sock 00:07:12.937 22:30:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 109478 ']' 00:07:12.938 22:30:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.938 22:30:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.938 22:30:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:07:12.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.938 22:30:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.938 22:30:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.938 [2024-10-11 22:30:16.082876] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:07:12.938 [2024-10-11 22:30:16.082969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109478 ] 00:07:12.938 [2024-10-11 22:30:16.140800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.938 [2024-10-11 22:30:16.190944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.196 22:30:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:13.196 22:30:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:13.196 22:30:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=109483 00:07:13.196 22:30:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:13.196 22:30:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 109483 /var/tmp/spdk2.sock 00:07:13.196 22:30:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 109483 ']' 00:07:13.196 22:30:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:07:13.196 22:30:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.196 22:30:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:13.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:13.196 22:30:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.196 22:30:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.455 [2024-10-11 22:30:16.505053] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:07:13.455 [2024-10-11 22:30:16.505138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109483 ] 00:07:13.455 [2024-10-11 22:30:16.587168] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:13.455 [2024-10-11 22:30:16.587193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.455 [2024-10-11 22:30:16.675277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.022 22:30:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.022 22:30:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:14.022 22:30:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 109478 00:07:14.022 22:30:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 109478 00:07:14.022 22:30:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:14.588 lslocks: write error 00:07:14.588 22:30:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 109478 00:07:14.588 22:30:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 109478 ']' 00:07:14.588 22:30:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 109478 00:07:14.588 22:30:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:14.588 22:30:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.588 22:30:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 109478 00:07:14.588 22:30:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.588 22:30:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.588 22:30:17 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 109478' 00:07:14.588 killing process with pid 109478 00:07:14.588 22:30:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 109478 00:07:14.588 22:30:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 109478 00:07:15.156 22:30:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 109483 00:07:15.156 22:30:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 109483 ']' 00:07:15.156 22:30:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 109483 00:07:15.156 22:30:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:15.156 22:30:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:15.156 22:30:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 109483 00:07:15.156 22:30:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:15.156 22:30:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:15.156 22:30:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 109483' 00:07:15.156 killing process with pid 109483 00:07:15.156 22:30:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 109483 00:07:15.156 22:30:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 109483 00:07:15.723 00:07:15.723 real 0m2.718s 00:07:15.723 user 0m2.765s 00:07:15.723 sys 0m0.938s 00:07:15.723 22:30:18 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.723 22:30:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.723 ************************************ 00:07:15.723 END TEST non_locking_app_on_locked_coremask 00:07:15.723 ************************************ 00:07:15.723 22:30:18 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:15.723 22:30:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:15.723 22:30:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.723 22:30:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.723 ************************************ 00:07:15.723 START TEST locking_app_on_unlocked_coremask 00:07:15.723 ************************************ 00:07:15.723 22:30:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:15.723 22:30:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=109782 00:07:15.723 22:30:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:15.723 22:30:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 109782 /var/tmp/spdk.sock 00:07:15.723 22:30:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 109782 ']' 00:07:15.723 22:30:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.723 22:30:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.723 22:30:18 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.723 22:30:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.723 22:30:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.723 [2024-10-11 22:30:18.857809] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:07:15.723 [2024-10-11 22:30:18.857903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109782 ] 00:07:15.723 [2024-10-11 22:30:18.915350] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:15.723 [2024-10-11 22:30:18.915387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.723 [2024-10-11 22:30:18.960656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.981 22:30:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.981 22:30:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:15.981 22:30:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=109908 00:07:15.981 22:30:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:15.982 22:30:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 109908 /var/tmp/spdk2.sock 00:07:15.982 22:30:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 109908 ']' 00:07:15.982 22:30:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:15.982 22:30:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.982 22:30:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:15.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:15.982 22:30:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.982 22:30:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.241 [2024-10-11 22:30:19.262995] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:07:16.241 [2024-10-11 22:30:19.263079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109908 ] 00:07:16.241 [2024-10-11 22:30:19.344754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.241 [2024-10-11 22:30:19.432913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.809 22:30:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.809 22:30:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:16.809 22:30:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 109908 00:07:16.809 22:30:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 109908 00:07:16.809 22:30:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:17.067 lslocks: write error 00:07:17.067 22:30:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 109782 00:07:17.067 22:30:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 109782 ']' 00:07:17.067 22:30:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 109782 00:07:17.067 22:30:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:17.067 22:30:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:17.067 22:30:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 109782 00:07:17.325 22:30:20 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:17.325 22:30:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:17.325 22:30:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 109782' 00:07:17.325 killing process with pid 109782 00:07:17.325 22:30:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 109782 00:07:17.325 22:30:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 109782 00:07:17.892 22:30:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 109908 00:07:17.892 22:30:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 109908 ']' 00:07:17.892 22:30:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 109908 00:07:17.892 22:30:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:17.892 22:30:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:17.892 22:30:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 109908 00:07:17.892 22:30:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:17.892 22:30:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:17.892 22:30:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 109908' 00:07:17.892 killing process with pid 109908 00:07:17.892 22:30:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 109908 00:07:17.892 22:30:21 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 109908 00:07:18.458 00:07:18.458 real 0m2.712s 00:07:18.458 user 0m2.738s 00:07:18.458 sys 0m0.969s 00:07:18.458 22:30:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.458 22:30:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.458 ************************************ 00:07:18.458 END TEST locking_app_on_unlocked_coremask 00:07:18.458 ************************************ 00:07:18.458 22:30:21 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:18.458 22:30:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:18.458 22:30:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.458 22:30:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.458 ************************************ 00:07:18.458 START TEST locking_app_on_locked_coremask 00:07:18.458 ************************************ 00:07:18.458 22:30:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:18.458 22:30:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=110212 00:07:18.458 22:30:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:18.458 22:30:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 110212 /var/tmp/spdk.sock 00:07:18.458 22:30:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 110212 ']' 00:07:18.458 22:30:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
00:07:18.459 22:30:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.459 22:30:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.459 22:30:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.459 22:30:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.459 [2024-10-11 22:30:21.618577] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:07:18.459 [2024-10-11 22:30:21.618682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110212 ] 00:07:18.459 [2024-10-11 22:30:21.677784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.717 [2024-10-11 22:30:21.727180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.717 22:30:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:18.717 22:30:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:18.717 22:30:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=110222 00:07:18.717 22:30:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:18.717 22:30:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 110222 /var/tmp/spdk2.sock 
00:07:18.717 22:30:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:18.976 22:30:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 110222 /var/tmp/spdk2.sock 00:07:18.976 22:30:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:18.976 22:30:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.976 22:30:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:18.976 22:30:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.976 22:30:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 110222 /var/tmp/spdk2.sock 00:07:18.976 22:30:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 110222 ']' 00:07:18.976 22:30:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:18.976 22:30:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.976 22:30:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:18.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:18.976 22:30:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.976 22:30:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.976 [2024-10-11 22:30:22.041528] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:07:18.976 [2024-10-11 22:30:22.041625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110222 ] 00:07:18.976 [2024-10-11 22:30:22.127266] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 110212 has claimed it. 00:07:18.976 [2024-10-11 22:30:22.127335] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:19.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (110222) - No such process 00:07:19.542 ERROR: process (pid: 110222) is no longer running 00:07:19.542 22:30:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.542 22:30:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:19.542 22:30:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:19.542 22:30:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:19.542 22:30:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:19.542 22:30:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:19.542 22:30:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 110212 00:07:19.542 22:30:22 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 110212 00:07:19.542 22:30:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:19.800 lslocks: write error 00:07:19.800 22:30:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 110212 00:07:19.800 22:30:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 110212 ']' 00:07:19.800 22:30:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 110212 00:07:19.800 22:30:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:19.800 22:30:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.800 22:30:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 110212 00:07:19.800 22:30:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.800 22:30:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.800 22:30:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 110212' 00:07:19.800 killing process with pid 110212 00:07:19.800 22:30:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 110212 00:07:19.800 22:30:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 110212 00:07:20.366 00:07:20.366 real 0m1.856s 00:07:20.366 user 0m2.061s 00:07:20.366 sys 0m0.606s 00:07:20.366 22:30:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.366 22:30:23 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:07:20.366 ************************************ 00:07:20.366 END TEST locking_app_on_locked_coremask 00:07:20.366 ************************************ 00:07:20.366 22:30:23 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:20.366 22:30:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.366 22:30:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.366 22:30:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.366 ************************************ 00:07:20.366 START TEST locking_overlapped_coremask 00:07:20.367 ************************************ 00:07:20.367 22:30:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:20.367 22:30:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=110389 00:07:20.367 22:30:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:20.367 22:30:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 110389 /var/tmp/spdk.sock 00:07:20.367 22:30:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 110389 ']' 00:07:20.367 22:30:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.367 22:30:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.367 22:30:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:20.367 22:30:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.367 22:30:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.367 [2024-10-11 22:30:23.524418] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:07:20.367 [2024-10-11 22:30:23.524511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110389 ] 00:07:20.367 [2024-10-11 22:30:23.584267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.624 [2024-10-11 22:30:23.637533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.624 [2024-10-11 22:30:23.637600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.625 [2024-10-11 22:30:23.637605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.882 22:30:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.882 22:30:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:20.882 22:30:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=110521 00:07:20.882 22:30:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:20.882 22:30:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 110521 /var/tmp/spdk2.sock 00:07:20.882 22:30:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:20.882 22:30:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg 
waitforlisten 110521 /var/tmp/spdk2.sock 00:07:20.882 22:30:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:20.882 22:30:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.882 22:30:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:20.883 22:30:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.883 22:30:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 110521 /var/tmp/spdk2.sock 00:07:20.883 22:30:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 110521 ']' 00:07:20.883 22:30:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:20.883 22:30:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.883 22:30:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:20.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:20.883 22:30:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.883 22:30:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.883 [2024-10-11 22:30:23.949789] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:07:20.883 [2024-10-11 22:30:23.949918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110521 ] 00:07:20.883 [2024-10-11 22:30:24.040391] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 110389 has claimed it. 00:07:20.883 [2024-10-11 22:30:24.040453] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:21.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (110521) - No such process 00:07:21.450 ERROR: process (pid: 110521) is no longer running 00:07:21.450 22:30:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.450 22:30:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:21.450 22:30:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:21.450 22:30:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:21.450 22:30:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:21.450 22:30:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:21.450 22:30:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:21.450 22:30:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:21.450 22:30:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:21.450 22:30:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:21.450 22:30:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 110389 00:07:21.450 22:30:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 110389 ']' 00:07:21.450 22:30:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 110389 00:07:21.450 22:30:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:21.450 22:30:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:21.450 22:30:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 110389 00:07:21.450 22:30:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:21.450 22:30:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:21.450 22:30:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 110389' 00:07:21.450 killing process with pid 110389 00:07:21.450 22:30:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 110389 00:07:21.450 22:30:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 110389 00:07:22.016 00:07:22.016 real 0m1.595s 00:07:22.016 user 0m4.514s 00:07:22.016 sys 0m0.465s 00:07:22.016 22:30:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.016 22:30:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:22.016 ************************************ 
00:07:22.016 END TEST locking_overlapped_coremask 00:07:22.016 ************************************ 00:07:22.016 22:30:25 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:22.016 22:30:25 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:22.016 22:30:25 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.016 22:30:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.016 ************************************ 00:07:22.016 START TEST locking_overlapped_coremask_via_rpc 00:07:22.016 ************************************ 00:07:22.016 22:30:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:22.016 22:30:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=110684 00:07:22.016 22:30:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:22.016 22:30:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 110684 /var/tmp/spdk.sock 00:07:22.016 22:30:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 110684 ']' 00:07:22.016 22:30:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.016 22:30:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:22.016 22:30:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:22.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.016 22:30:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:22.016 22:30:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.016 [2024-10-11 22:30:25.168229] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:07:22.016 [2024-10-11 22:30:25.168323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110684 ] 00:07:22.017 [2024-10-11 22:30:25.225822] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:22.017 [2024-10-11 22:30:25.225866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:22.017 [2024-10-11 22:30:25.270588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.017 [2024-10-11 22:30:25.270652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.017 [2024-10-11 22:30:25.270655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.275 22:30:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.275 22:30:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:22.275 22:30:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=110690 00:07:22.275 22:30:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:22.275 22:30:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # 
waitforlisten 110690 /var/tmp/spdk2.sock 00:07:22.275 22:30:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 110690 ']' 00:07:22.275 22:30:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:22.275 22:30:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:22.275 22:30:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:22.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:22.275 22:30:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:22.275 22:30:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.534 [2024-10-11 22:30:25.592716] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:07:22.534 [2024-10-11 22:30:25.592812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110690 ] 00:07:22.534 [2024-10-11 22:30:25.682698] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:22.534 [2024-10-11 22:30:25.682739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:22.534 [2024-10-11 22:30:25.778971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.534 [2024-10-11 22:30:25.779036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:22.534 [2024-10-11 22:30:25.779038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.101 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.101 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:23.101 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:23.101 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.101 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.101 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.101 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:23.101 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:23.101 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:23.101 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:23.101 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.101 22:30:26 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:23.101 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.101 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:23.101 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.101 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.101 [2024-10-11 22:30:26.304674] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 110684 has claimed it. 00:07:23.101 request: 00:07:23.101 { 00:07:23.101 "method": "framework_enable_cpumask_locks", 00:07:23.101 "req_id": 1 00:07:23.101 } 00:07:23.102 Got JSON-RPC error response 00:07:23.102 response: 00:07:23.102 { 00:07:23.102 "code": -32603, 00:07:23.102 "message": "Failed to claim CPU core: 2" 00:07:23.102 } 00:07:23.102 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:23.102 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:23.102 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:23.102 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:23.102 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:23.102 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 110684 /var/tmp/spdk.sock 00:07:23.102 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- 
# '[' -z 110684 ']' 00:07:23.102 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.102 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.102 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.102 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.102 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.360 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.360 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:23.360 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 110690 /var/tmp/spdk2.sock 00:07:23.360 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 110690 ']' 00:07:23.360 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:23.360 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.360 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:23.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:23.360 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.360 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.617 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.617 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:23.617 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:23.617 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:23.617 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:23.617 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:23.617 00:07:23.617 real 0m1.748s 00:07:23.617 user 0m0.923s 00:07:23.617 sys 0m0.128s 00:07:23.617 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.617 22:30:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.617 ************************************ 00:07:23.617 END TEST locking_overlapped_coremask_via_rpc 00:07:23.617 ************************************ 00:07:23.617 22:30:26 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:23.617 22:30:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 110684 ]] 00:07:23.617 22:30:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 110684 00:07:23.617 22:30:26 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 110684 ']' 00:07:23.617 22:30:26 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 110684 00:07:23.617 22:30:26 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:23.617 22:30:26 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:23.875 22:30:26 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 110684 00:07:23.875 22:30:26 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:23.875 22:30:26 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:23.875 22:30:26 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 110684' 00:07:23.875 killing process with pid 110684 00:07:23.875 22:30:26 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 110684 00:07:23.875 22:30:26 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 110684 00:07:24.133 22:30:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 110690 ]] 00:07:24.134 22:30:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 110690 00:07:24.134 22:30:27 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 110690 ']' 00:07:24.134 22:30:27 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 110690 00:07:24.134 22:30:27 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:24.134 22:30:27 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:24.134 22:30:27 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 110690 00:07:24.134 22:30:27 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:24.134 22:30:27 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:24.134 22:30:27 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 110690' 00:07:24.134 
killing process with pid 110690 00:07:24.134 22:30:27 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 110690 00:07:24.134 22:30:27 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 110690 00:07:24.707 22:30:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:24.707 22:30:27 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:24.707 22:30:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 110684 ]] 00:07:24.707 22:30:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 110684 00:07:24.707 22:30:27 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 110684 ']' 00:07:24.707 22:30:27 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 110684 00:07:24.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (110684) - No such process 00:07:24.707 22:30:27 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 110684 is not found' 00:07:24.707 Process with pid 110684 is not found 00:07:24.707 22:30:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 110690 ]] 00:07:24.707 22:30:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 110690 00:07:24.707 22:30:27 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 110690 ']' 00:07:24.707 22:30:27 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 110690 00:07:24.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (110690) - No such process 00:07:24.707 22:30:27 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 110690 is not found' 00:07:24.707 Process with pid 110690 is not found 00:07:24.707 22:30:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:24.707 00:07:24.707 real 0m14.113s 00:07:24.707 user 0m25.176s 00:07:24.707 sys 0m4.982s 00:07:24.707 22:30:27 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.707 22:30:27 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:07:24.707 ************************************ 00:07:24.707 END TEST cpu_locks 00:07:24.707 ************************************ 00:07:24.707 00:07:24.707 real 0m38.773s 00:07:24.707 user 1m16.065s 00:07:24.707 sys 0m9.159s 00:07:24.707 22:30:27 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.707 22:30:27 event -- common/autotest_common.sh@10 -- # set +x 00:07:24.707 ************************************ 00:07:24.707 END TEST event 00:07:24.707 ************************************ 00:07:24.707 22:30:27 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:24.707 22:30:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:24.707 22:30:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.707 22:30:27 -- common/autotest_common.sh@10 -- # set +x 00:07:24.707 ************************************ 00:07:24.707 START TEST thread 00:07:24.707 ************************************ 00:07:24.707 22:30:27 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:24.707 * Looking for test storage... 
00:07:24.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:24.707 22:30:27 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:24.707 22:30:27 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:07:24.707 22:30:27 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:24.707 22:30:27 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:24.707 22:30:27 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:24.707 22:30:27 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:24.707 22:30:27 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:24.708 22:30:27 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.708 22:30:27 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:24.708 22:30:27 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:24.708 22:30:27 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:24.708 22:30:27 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:24.708 22:30:27 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:24.708 22:30:27 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:24.708 22:30:27 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:24.708 22:30:27 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:24.708 22:30:27 thread -- scripts/common.sh@345 -- # : 1 00:07:24.708 22:30:27 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:24.708 22:30:27 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:24.708 22:30:27 thread -- scripts/common.sh@365 -- # decimal 1 00:07:24.708 22:30:27 thread -- scripts/common.sh@353 -- # local d=1 00:07:24.708 22:30:27 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.708 22:30:27 thread -- scripts/common.sh@355 -- # echo 1 00:07:24.708 22:30:27 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:24.708 22:30:27 thread -- scripts/common.sh@366 -- # decimal 2 00:07:24.708 22:30:27 thread -- scripts/common.sh@353 -- # local d=2 00:07:24.708 22:30:27 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.708 22:30:27 thread -- scripts/common.sh@355 -- # echo 2 00:07:24.708 22:30:27 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:24.708 22:30:27 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:24.708 22:30:27 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:24.708 22:30:27 thread -- scripts/common.sh@368 -- # return 0 00:07:24.708 22:30:27 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:24.708 22:30:27 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:24.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.708 --rc genhtml_branch_coverage=1 00:07:24.708 --rc genhtml_function_coverage=1 00:07:24.708 --rc genhtml_legend=1 00:07:24.708 --rc geninfo_all_blocks=1 00:07:24.708 --rc geninfo_unexecuted_blocks=1 00:07:24.708 00:07:24.708 ' 00:07:24.708 22:30:27 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:24.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.708 --rc genhtml_branch_coverage=1 00:07:24.708 --rc genhtml_function_coverage=1 00:07:24.708 --rc genhtml_legend=1 00:07:24.708 --rc geninfo_all_blocks=1 00:07:24.708 --rc geninfo_unexecuted_blocks=1 00:07:24.708 00:07:24.708 ' 00:07:24.708 22:30:27 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:24.708 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.708 --rc genhtml_branch_coverage=1 00:07:24.708 --rc genhtml_function_coverage=1 00:07:24.708 --rc genhtml_legend=1 00:07:24.708 --rc geninfo_all_blocks=1 00:07:24.708 --rc geninfo_unexecuted_blocks=1 00:07:24.708 00:07:24.708 ' 00:07:24.708 22:30:27 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:24.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.708 --rc genhtml_branch_coverage=1 00:07:24.708 --rc genhtml_function_coverage=1 00:07:24.708 --rc genhtml_legend=1 00:07:24.708 --rc geninfo_all_blocks=1 00:07:24.708 --rc geninfo_unexecuted_blocks=1 00:07:24.708 00:07:24.708 ' 00:07:24.708 22:30:27 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:24.708 22:30:27 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:24.708 22:30:27 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.708 22:30:27 thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.708 ************************************ 00:07:24.708 START TEST thread_poller_perf 00:07:24.708 ************************************ 00:07:24.967 22:30:27 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:24.967 [2024-10-11 22:30:27.989041] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:07:24.967 [2024-10-11 22:30:27.989107] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111069 ] 00:07:24.967 [2024-10-11 22:30:28.047857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.967 [2024-10-11 22:30:28.095253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.967 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:25.903 [2024-10-11T20:30:29.171Z] ====================================== 00:07:25.903 [2024-10-11T20:30:29.171Z] busy:2709205344 (cyc) 00:07:25.903 [2024-10-11T20:30:29.171Z] total_run_count: 368000 00:07:25.903 [2024-10-11T20:30:29.171Z] tsc_hz: 2700000000 (cyc) 00:07:25.903 [2024-10-11T20:30:29.171Z] ====================================== 00:07:25.903 [2024-10-11T20:30:29.171Z] poller_cost: 7361 (cyc), 2726 (nsec) 00:07:25.903 00:07:25.903 real 0m1.171s 00:07:25.903 user 0m1.104s 00:07:25.903 sys 0m0.062s 00:07:25.903 22:30:29 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.904 22:30:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:25.904 ************************************ 00:07:25.904 END TEST thread_poller_perf 00:07:25.904 ************************************ 00:07:25.904 22:30:29 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:25.904 22:30:29 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:25.904 22:30:29 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.904 22:30:29 thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.163 ************************************ 00:07:26.163 START TEST thread_poller_perf 00:07:26.163 
************************************ 00:07:26.163 22:30:29 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:26.163 [2024-10-11 22:30:29.212778] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:07:26.163 [2024-10-11 22:30:29.212841] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111221 ] 00:07:26.163 [2024-10-11 22:30:29.271884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.163 [2024-10-11 22:30:29.315480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.163 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:27.100 [2024-10-11T20:30:30.368Z] ====================================== 00:07:27.100 [2024-10-11T20:30:30.368Z] busy:2702674443 (cyc) 00:07:27.100 [2024-10-11T20:30:30.368Z] total_run_count: 4679000 00:07:27.100 [2024-10-11T20:30:30.368Z] tsc_hz: 2700000000 (cyc) 00:07:27.100 [2024-10-11T20:30:30.368Z] ====================================== 00:07:27.100 [2024-10-11T20:30:30.368Z] poller_cost: 577 (cyc), 213 (nsec) 00:07:27.100 00:07:27.100 real 0m1.164s 00:07:27.100 user 0m1.091s 00:07:27.100 sys 0m0.067s 00:07:27.100 22:30:30 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.100 22:30:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:27.100 ************************************ 00:07:27.100 END TEST thread_poller_perf 00:07:27.100 ************************************ 00:07:27.360 22:30:30 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:27.360 00:07:27.360 real 0m2.577s 00:07:27.360 user 0m2.323s 00:07:27.360 sys 0m0.258s 00:07:27.360 22:30:30 thread -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.360 22:30:30 thread -- common/autotest_common.sh@10 -- # set +x 00:07:27.360 ************************************ 00:07:27.360 END TEST thread 00:07:27.360 ************************************ 00:07:27.360 22:30:30 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:27.360 22:30:30 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:27.360 22:30:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.360 22:30:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.360 22:30:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.360 ************************************ 00:07:27.360 START TEST app_cmdline 00:07:27.360 ************************************ 00:07:27.360 22:30:30 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:27.360 * Looking for test storage... 00:07:27.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:27.360 22:30:30 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:27.360 22:30:30 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:07:27.360 22:30:30 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:27.360 22:30:30 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:27.360 22:30:30 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.360 22:30:30 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.360 22:30:30 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.360 22:30:30 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.360 22:30:30 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.360 22:30:30 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.360 22:30:30 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:07:27.360 22:30:30 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.360 22:30:30 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.360 22:30:30 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.360 22:30:30 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.360 22:30:30 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:27.360 22:30:30 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:27.360 22:30:30 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.360 22:30:30 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:27.360 22:30:30 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:27.360 22:30:30 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:27.360 22:30:30 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.360 22:30:30 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:27.360 22:30:30 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.360 22:30:30 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:27.360 22:30:30 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:27.360 22:30:30 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.360 22:30:30 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:27.360 22:30:30 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.360 22:30:30 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.360 22:30:30 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.360 22:30:30 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:27.360 22:30:30 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.360 22:30:30 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:27.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.360 --rc genhtml_branch_coverage=1 
00:07:27.360 --rc genhtml_function_coverage=1 00:07:27.360 --rc genhtml_legend=1 00:07:27.360 --rc geninfo_all_blocks=1 00:07:27.360 --rc geninfo_unexecuted_blocks=1 00:07:27.360 00:07:27.360 ' 00:07:27.360 22:30:30 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:27.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.360 --rc genhtml_branch_coverage=1 00:07:27.360 --rc genhtml_function_coverage=1 00:07:27.360 --rc genhtml_legend=1 00:07:27.360 --rc geninfo_all_blocks=1 00:07:27.360 --rc geninfo_unexecuted_blocks=1 00:07:27.360 00:07:27.360 ' 00:07:27.360 22:30:30 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:27.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.360 --rc genhtml_branch_coverage=1 00:07:27.360 --rc genhtml_function_coverage=1 00:07:27.360 --rc genhtml_legend=1 00:07:27.360 --rc geninfo_all_blocks=1 00:07:27.360 --rc geninfo_unexecuted_blocks=1 00:07:27.360 00:07:27.360 ' 00:07:27.360 22:30:30 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:27.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.360 --rc genhtml_branch_coverage=1 00:07:27.360 --rc genhtml_function_coverage=1 00:07:27.360 --rc genhtml_legend=1 00:07:27.360 --rc geninfo_all_blocks=1 00:07:27.360 --rc geninfo_unexecuted_blocks=1 00:07:27.360 00:07:27.360 ' 00:07:27.360 22:30:30 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:27.360 22:30:30 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=111537 00:07:27.360 22:30:30 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:27.360 22:30:30 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 111537 00:07:27.360 22:30:30 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 111537 ']' 00:07:27.360 22:30:30 app_cmdline -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:27.360 22:30:30 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.360 22:30:30 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.360 22:30:30 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.360 22:30:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:27.620 [2024-10-11 22:30:30.642669] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:07:27.620 [2024-10-11 22:30:30.642750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111537 ] 00:07:27.620 [2024-10-11 22:30:30.701090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.620 [2024-10-11 22:30:30.748094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.879 22:30:31 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.879 22:30:31 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:27.879 22:30:31 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:28.137 { 00:07:28.137 "version": "SPDK v25.01-pre git sha1 bbce7a874", 00:07:28.137 "fields": { 00:07:28.137 "major": 25, 00:07:28.137 "minor": 1, 00:07:28.137 "patch": 0, 00:07:28.137 "suffix": "-pre", 00:07:28.137 "commit": "bbce7a874" 00:07:28.137 } 00:07:28.137 } 00:07:28.137 22:30:31 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:28.138 22:30:31 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:28.138 22:30:31 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:07:28.138 22:30:31 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:28.138 22:30:31 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:28.138 22:30:31 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.138 22:30:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:28.138 22:30:31 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:28.138 22:30:31 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:28.138 22:30:31 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.138 22:30:31 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:28.138 22:30:31 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:28.138 22:30:31 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.138 22:30:31 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:28.138 22:30:31 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.138 22:30:31 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.138 22:30:31 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.138 22:30:31 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.138 22:30:31 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.138 22:30:31 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.138 22:30:31 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type 
-t "$arg")" in 00:07:28.138 22:30:31 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.138 22:30:31 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:28.138 22:30:31 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.396 request: 00:07:28.396 { 00:07:28.396 "method": "env_dpdk_get_mem_stats", 00:07:28.396 "req_id": 1 00:07:28.396 } 00:07:28.396 Got JSON-RPC error response 00:07:28.396 response: 00:07:28.396 { 00:07:28.396 "code": -32601, 00:07:28.396 "message": "Method not found" 00:07:28.396 } 00:07:28.396 22:30:31 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:28.396 22:30:31 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:28.396 22:30:31 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:28.396 22:30:31 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:28.396 22:30:31 app_cmdline -- app/cmdline.sh@1 -- # killprocess 111537 00:07:28.396 22:30:31 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 111537 ']' 00:07:28.396 22:30:31 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 111537 00:07:28.396 22:30:31 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:28.396 22:30:31 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:28.396 22:30:31 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 111537 00:07:28.396 22:30:31 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:28.396 22:30:31 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:28.396 22:30:31 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 111537' 00:07:28.396 killing process with pid 111537 00:07:28.396 22:30:31 
app_cmdline -- common/autotest_common.sh@969 -- # kill 111537 00:07:28.396 22:30:31 app_cmdline -- common/autotest_common.sh@974 -- # wait 111537 00:07:28.964 00:07:28.964 real 0m1.552s 00:07:28.964 user 0m1.932s 00:07:28.964 sys 0m0.488s 00:07:28.964 22:30:31 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.964 22:30:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:28.964 ************************************ 00:07:28.964 END TEST app_cmdline 00:07:28.964 ************************************ 00:07:28.964 22:30:32 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:28.964 22:30:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.964 22:30:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.964 22:30:32 -- common/autotest_common.sh@10 -- # set +x 00:07:28.964 ************************************ 00:07:28.964 START TEST version 00:07:28.964 ************************************ 00:07:28.964 22:30:32 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:28.964 * Looking for test storage... 
00:07:28.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:28.964 22:30:32 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:28.964 22:30:32 version -- common/autotest_common.sh@1691 -- # lcov --version 00:07:28.964 22:30:32 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:28.964 22:30:32 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:28.964 22:30:32 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.964 22:30:32 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.964 22:30:32 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.964 22:30:32 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.964 22:30:32 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.964 22:30:32 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.964 22:30:32 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.964 22:30:32 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.964 22:30:32 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.964 22:30:32 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.964 22:30:32 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.964 22:30:32 version -- scripts/common.sh@344 -- # case "$op" in 00:07:28.964 22:30:32 version -- scripts/common.sh@345 -- # : 1 00:07:28.964 22:30:32 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.964 22:30:32 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:28.964 22:30:32 version -- scripts/common.sh@365 -- # decimal 1 00:07:28.964 22:30:32 version -- scripts/common.sh@353 -- # local d=1 00:07:28.964 22:30:32 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.964 22:30:32 version -- scripts/common.sh@355 -- # echo 1 00:07:28.964 22:30:32 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.964 22:30:32 version -- scripts/common.sh@366 -- # decimal 2 00:07:28.964 22:30:32 version -- scripts/common.sh@353 -- # local d=2 00:07:28.964 22:30:32 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.964 22:30:32 version -- scripts/common.sh@355 -- # echo 2 00:07:28.964 22:30:32 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.964 22:30:32 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.964 22:30:32 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.964 22:30:32 version -- scripts/common.sh@368 -- # return 0 00:07:28.964 22:30:32 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.964 22:30:32 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:28.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.964 --rc genhtml_branch_coverage=1 00:07:28.964 --rc genhtml_function_coverage=1 00:07:28.964 --rc genhtml_legend=1 00:07:28.964 --rc geninfo_all_blocks=1 00:07:28.964 --rc geninfo_unexecuted_blocks=1 00:07:28.964 00:07:28.964 ' 00:07:28.964 22:30:32 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:28.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.964 --rc genhtml_branch_coverage=1 00:07:28.964 --rc genhtml_function_coverage=1 00:07:28.964 --rc genhtml_legend=1 00:07:28.964 --rc geninfo_all_blocks=1 00:07:28.964 --rc geninfo_unexecuted_blocks=1 00:07:28.964 00:07:28.964 ' 00:07:28.964 22:30:32 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:28.964 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.964 --rc genhtml_branch_coverage=1 00:07:28.964 --rc genhtml_function_coverage=1 00:07:28.964 --rc genhtml_legend=1 00:07:28.964 --rc geninfo_all_blocks=1 00:07:28.964 --rc geninfo_unexecuted_blocks=1 00:07:28.964 00:07:28.964 ' 00:07:28.964 22:30:32 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:28.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.964 --rc genhtml_branch_coverage=1 00:07:28.964 --rc genhtml_function_coverage=1 00:07:28.964 --rc genhtml_legend=1 00:07:28.964 --rc geninfo_all_blocks=1 00:07:28.964 --rc geninfo_unexecuted_blocks=1 00:07:28.964 00:07:28.964 ' 00:07:28.964 22:30:32 version -- app/version.sh@17 -- # get_header_version major 00:07:28.964 22:30:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:28.964 22:30:32 version -- app/version.sh@14 -- # cut -f2 00:07:28.964 22:30:32 version -- app/version.sh@14 -- # tr -d '"' 00:07:28.964 22:30:32 version -- app/version.sh@17 -- # major=25 00:07:28.964 22:30:32 version -- app/version.sh@18 -- # get_header_version minor 00:07:28.964 22:30:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:28.964 22:30:32 version -- app/version.sh@14 -- # cut -f2 00:07:28.964 22:30:32 version -- app/version.sh@14 -- # tr -d '"' 00:07:28.964 22:30:32 version -- app/version.sh@18 -- # minor=1 00:07:28.964 22:30:32 version -- app/version.sh@19 -- # get_header_version patch 00:07:28.964 22:30:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:28.964 22:30:32 version -- app/version.sh@14 -- # cut -f2 00:07:28.964 22:30:32 version -- app/version.sh@14 -- # tr -d '"' 00:07:28.964 
22:30:32 version -- app/version.sh@19 -- # patch=0 00:07:28.964 22:30:32 version -- app/version.sh@20 -- # get_header_version suffix 00:07:28.964 22:30:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:28.964 22:30:32 version -- app/version.sh@14 -- # cut -f2 00:07:28.964 22:30:32 version -- app/version.sh@14 -- # tr -d '"' 00:07:28.964 22:30:32 version -- app/version.sh@20 -- # suffix=-pre 00:07:28.964 22:30:32 version -- app/version.sh@22 -- # version=25.1 00:07:28.964 22:30:32 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:28.964 22:30:32 version -- app/version.sh@28 -- # version=25.1rc0 00:07:28.964 22:30:32 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:28.964 22:30:32 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:29.223 22:30:32 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:29.223 22:30:32 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:29.223 00:07:29.223 real 0m0.200s 00:07:29.224 user 0m0.126s 00:07:29.224 sys 0m0.098s 00:07:29.224 22:30:32 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.224 22:30:32 version -- common/autotest_common.sh@10 -- # set +x 00:07:29.224 ************************************ 00:07:29.224 END TEST version 00:07:29.224 ************************************ 00:07:29.224 22:30:32 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:29.224 22:30:32 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:29.224 22:30:32 -- spdk/autotest.sh@194 -- # uname -s 00:07:29.224 22:30:32 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:07:29.224 22:30:32 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:29.224 22:30:32 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:29.224 22:30:32 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:29.224 22:30:32 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:29.224 22:30:32 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:29.224 22:30:32 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:29.224 22:30:32 -- common/autotest_common.sh@10 -- # set +x 00:07:29.224 22:30:32 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:29.224 22:30:32 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:07:29.224 22:30:32 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:07:29.224 22:30:32 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:07:29.224 22:30:32 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:07:29.224 22:30:32 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:07:29.224 22:30:32 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:29.224 22:30:32 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:29.224 22:30:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.224 22:30:32 -- common/autotest_common.sh@10 -- # set +x 00:07:29.224 ************************************ 00:07:29.224 START TEST nvmf_tcp 00:07:29.224 ************************************ 00:07:29.224 22:30:32 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:29.224 * Looking for test storage... 
00:07:29.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:29.224 22:30:32 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:29.224 22:30:32 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:07:29.224 22:30:32 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:29.224 22:30:32 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:29.224 22:30:32 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.224 22:30:32 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.224 22:30:32 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.224 22:30:32 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.224 22:30:32 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.224 22:30:32 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.224 22:30:32 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.224 22:30:32 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.224 22:30:32 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.224 22:30:32 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.224 22:30:32 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.224 22:30:32 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:29.224 22:30:32 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:29.224 22:30:32 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.224 22:30:32 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.224 22:30:32 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:29.224 22:30:32 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:29.224 22:30:32 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.224 22:30:32 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:29.224 22:30:32 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.224 22:30:32 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:29.224 22:30:32 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:29.224 22:30:32 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.224 22:30:32 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:29.224 22:30:32 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.224 22:30:32 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.224 22:30:32 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.224 22:30:32 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:29.224 22:30:32 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.224 22:30:32 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:29.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.224 --rc genhtml_branch_coverage=1 00:07:29.224 --rc genhtml_function_coverage=1 00:07:29.224 --rc genhtml_legend=1 00:07:29.224 --rc geninfo_all_blocks=1 00:07:29.224 --rc geninfo_unexecuted_blocks=1 00:07:29.224 00:07:29.224 ' 00:07:29.224 22:30:32 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:29.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.224 --rc genhtml_branch_coverage=1 00:07:29.224 --rc genhtml_function_coverage=1 00:07:29.224 --rc genhtml_legend=1 00:07:29.224 --rc geninfo_all_blocks=1 00:07:29.224 --rc geninfo_unexecuted_blocks=1 00:07:29.224 00:07:29.224 ' 00:07:29.224 22:30:32 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:07:29.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.224 --rc genhtml_branch_coverage=1 00:07:29.224 --rc genhtml_function_coverage=1 00:07:29.224 --rc genhtml_legend=1 00:07:29.224 --rc geninfo_all_blocks=1 00:07:29.224 --rc geninfo_unexecuted_blocks=1 00:07:29.224 00:07:29.224 ' 00:07:29.224 22:30:32 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:29.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.224 --rc genhtml_branch_coverage=1 00:07:29.224 --rc genhtml_function_coverage=1 00:07:29.224 --rc genhtml_legend=1 00:07:29.224 --rc geninfo_all_blocks=1 00:07:29.224 --rc geninfo_unexecuted_blocks=1 00:07:29.224 00:07:29.224 ' 00:07:29.224 22:30:32 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:29.224 22:30:32 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:29.224 22:30:32 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:29.224 22:30:32 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:29.224 22:30:32 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.224 22:30:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:29.224 ************************************ 00:07:29.224 START TEST nvmf_target_core 00:07:29.224 ************************************ 00:07:29.224 22:30:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:29.483 * Looking for test storage... 
00:07:29.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:29.483 22:30:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:29.483 22:30:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:07:29.483 22:30:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:29.483 22:30:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:29.483 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.483 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.483 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.483 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.483 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.483 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.483 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.483 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.483 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.483 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:29.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.484 --rc genhtml_branch_coverage=1 00:07:29.484 --rc genhtml_function_coverage=1 00:07:29.484 --rc genhtml_legend=1 00:07:29.484 --rc geninfo_all_blocks=1 00:07:29.484 --rc geninfo_unexecuted_blocks=1 00:07:29.484 00:07:29.484 ' 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:29.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.484 --rc genhtml_branch_coverage=1 
00:07:29.484 --rc genhtml_function_coverage=1 00:07:29.484 --rc genhtml_legend=1 00:07:29.484 --rc geninfo_all_blocks=1 00:07:29.484 --rc geninfo_unexecuted_blocks=1 00:07:29.484 00:07:29.484 ' 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:29.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.484 --rc genhtml_branch_coverage=1 00:07:29.484 --rc genhtml_function_coverage=1 00:07:29.484 --rc genhtml_legend=1 00:07:29.484 --rc geninfo_all_blocks=1 00:07:29.484 --rc geninfo_unexecuted_blocks=1 00:07:29.484 00:07:29.484 ' 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:29.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.484 --rc genhtml_branch_coverage=1 00:07:29.484 --rc genhtml_function_coverage=1 00:07:29.484 --rc genhtml_legend=1 00:07:29.484 --rc geninfo_all_blocks=1 00:07:29.484 --rc geninfo_unexecuted_blocks=1 00:07:29.484 00:07:29.484 ' 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:29.484 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
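The `[: : integer expression expected` message interleaved above comes from `nvmf/common.sh` line 33 evaluating `'[' '' -eq 1 ']'`, i.e. a numeric test against an unset/empty variable. A minimal reproduction, plus the usual guard of defaulting the empty value to 0 (the variable name here is illustrative, not the one used in `common.sh`):

```shell
# Reproduce the failure: test(1) requires integer operands for -eq,
# so an empty string makes it exit 2 with "integer expression expected".
flag=""
[ "$flag" -eq 1 ] 2>/dev/null || echo "empty string is not an integer"

# Common guard: substitute 0 when the variable is unset or empty.
[ "${flag:-0}" -eq 1 ] || echo "guarded test: flag treated as 0"
```

With the guard, the test degrades to a clean false instead of a shell error, which is why the run above continues despite the message.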
00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:29.484 ************************************ 00:07:29.484 START TEST nvmf_abort 00:07:29.484 ************************************ 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:29.484 * Looking for test storage... 
00:07:29.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:07:29.484 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:29.744 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:29.744 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.744 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.744 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.745 
22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:29.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.745 --rc genhtml_branch_coverage=1 00:07:29.745 --rc genhtml_function_coverage=1 00:07:29.745 --rc genhtml_legend=1 00:07:29.745 --rc geninfo_all_blocks=1 00:07:29.745 --rc 
geninfo_unexecuted_blocks=1 00:07:29.745 00:07:29.745 ' 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:29.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.745 --rc genhtml_branch_coverage=1 00:07:29.745 --rc genhtml_function_coverage=1 00:07:29.745 --rc genhtml_legend=1 00:07:29.745 --rc geninfo_all_blocks=1 00:07:29.745 --rc geninfo_unexecuted_blocks=1 00:07:29.745 00:07:29.745 ' 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:29.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.745 --rc genhtml_branch_coverage=1 00:07:29.745 --rc genhtml_function_coverage=1 00:07:29.745 --rc genhtml_legend=1 00:07:29.745 --rc geninfo_all_blocks=1 00:07:29.745 --rc geninfo_unexecuted_blocks=1 00:07:29.745 00:07:29.745 ' 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:29.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.745 --rc genhtml_branch_coverage=1 00:07:29.745 --rc genhtml_function_coverage=1 00:07:29.745 --rc genhtml_legend=1 00:07:29.745 --rc geninfo_all_blocks=1 00:07:29.745 --rc geninfo_unexecuted_blocks=1 00:07:29.745 00:07:29.745 ' 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
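The `scripts/common.sh` xtrace above (`lt 1.15 2` → `cmp_versions 1.15 '<' 2`) records a field-by-field version comparison: each version is split on `.`, `-`, and `:` into an array, then compared numerically position by position. A minimal standalone sketch reconstructed from the trace — the real helpers also validate each field through a `decimal` function, which is omitted here:

```shell
# lt A B: true when version A sorts strictly before version B.
lt() { cmp_versions "$1" "<" "$2"; }

cmp_versions() {
    local -a ver1 ver2
    local op=$2 ver1_l ver2_l v
    IFS=.-: read -ra ver1 <<< "$1"   # split "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    # Walk the longer of the two arrays; missing fields count as 0.
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == ">" ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == "<" ]]; return; }
    done
    [[ $op == "==" ]]   # every field equal
}

lt 1.15 2 && echo "1.15 < 2"
```

This is why the trace compares `1` against `2` in the first field and immediately returns 0: with `lcov` at 1.15 and the threshold at 2, the branch-coverage `--rc` options for the older lcov syntax get selected.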
00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.745 22:30:32 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:29.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.745 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.746 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:29.746 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:07:29.746 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:29.746 22:30:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:31.654 22:30:34 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:31.654 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:31.654 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:31.654 22:30:34 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:31.654 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net 
devices under 0000:0a:00.1: cvl_0_1' 00:07:31.654 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:07:31.654 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:31.914 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:31.914 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:31.914 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:31.914 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:31.914 22:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:31.914 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:31.914 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:31.914 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:31.914 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:31.914 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:31.914 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:31.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:31.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:07:31.914 00:07:31.914 --- 10.0.0.2 ping statistics --- 00:07:31.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.914 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:07:31.914 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:31.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:31.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:07:31.914 00:07:31.914 --- 10.0.0.1 ping statistics --- 00:07:31.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.914 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:07:31.914 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:31.914 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:07:31.914 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:31.914 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:31.914 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:31.914 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:31.914 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:31.914 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:31.914 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:31.914 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:31.914 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:31.914 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:07:31.914 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:31.914 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=113629 00:07:31.914 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:31.914 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 113629 00:07:31.914 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 113629 ']' 00:07:31.914 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.914 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:31.914 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.914 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:31.914 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:31.914 [2024-10-11 22:30:35.180292] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:07:31.914 [2024-10-11 22:30:35.180378] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.173 [2024-10-11 22:30:35.247499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:32.173 [2024-10-11 22:30:35.296301] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:32.173 [2024-10-11 22:30:35.296369] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:32.173 [2024-10-11 22:30:35.296382] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:32.173 [2024-10-11 22:30:35.296394] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:32.173 [2024-10-11 22:30:35.296418] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:32.173 [2024-10-11 22:30:35.297809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.173 [2024-10-11 22:30:35.297874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:32.173 [2024-10-11 22:30:35.297877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.173 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.173 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:07:32.173 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:32.173 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:32.173 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:32.432 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:32.432 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:32.432 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.432 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:32.432 [2024-10-11 22:30:35.448332] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.432 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.432 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:32.432 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.432 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:32.432 Malloc0 00:07:32.432 22:30:35 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.432 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:32.432 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.432 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:32.432 Delay0 00:07:32.432 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.433 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:32.433 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.433 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:32.433 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.433 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:32.433 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.433 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:32.433 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.433 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:32.433 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.433 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:32.433 [2024-10-11 22:30:35.527676] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.433 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.433 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:32.433 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.433 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:32.433 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.433 22:30:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:32.433 [2024-10-11 22:30:35.632364] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:34.970 Initializing NVMe Controllers 00:07:34.970 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:34.970 controller IO queue size 128 less than required 00:07:34.970 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:34.970 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:34.970 Initialization complete. Launching workers. 
00:07:34.970 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28326 00:07:34.970 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28391, failed to submit 62 00:07:34.970 success 28330, unsuccessful 61, failed 0 00:07:34.970 22:30:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:34.970 22:30:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.970 22:30:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.970 22:30:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.970 22:30:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:34.970 22:30:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:34.970 22:30:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:34.970 22:30:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:34.970 22:30:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:34.970 22:30:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:34.970 22:30:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:34.970 22:30:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:34.970 rmmod nvme_tcp 00:07:34.970 rmmod nvme_fabrics 00:07:34.970 rmmod nvme_keyring 00:07:34.970 22:30:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:34.970 22:30:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:34.970 22:30:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:34.970 22:30:37 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 113629 ']' 00:07:34.970 22:30:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 113629 00:07:34.970 22:30:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 113629 ']' 00:07:34.970 22:30:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 113629 00:07:34.970 22:30:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:07:34.970 22:30:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:34.970 22:30:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 113629 00:07:34.970 22:30:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:34.971 22:30:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:34.971 22:30:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 113629' 00:07:34.971 killing process with pid 113629 00:07:34.971 22:30:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 113629 00:07:34.971 22:30:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 113629 00:07:34.971 22:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:34.971 22:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:34.971 22:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:34.971 22:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:34.971 22:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:07:34.971 22:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep 
-v SPDK_NVMF 00:07:34.971 22:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:07:34.971 22:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:34.971 22:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:34.971 22:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.971 22:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:34.971 22:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.886 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:36.886 00:07:36.886 real 0m7.444s 00:07:36.886 user 0m10.778s 00:07:36.886 sys 0m2.456s 00:07:36.886 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.886 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.886 ************************************ 00:07:36.886 END TEST nvmf_abort 00:07:36.886 ************************************ 00:07:36.887 22:30:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:36.887 22:30:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:36.887 22:30:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.887 22:30:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:36.887 ************************************ 00:07:36.887 START TEST nvmf_ns_hotplug_stress 00:07:36.887 ************************************ 00:07:36.887 22:30:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:37.146 * Looking for test storage... 00:07:37.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.146 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:37.146 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:07:37.146 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:37.146 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:37.146 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.146 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.146 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.146 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.146 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.146 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.146 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.146 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.146 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.146 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.146 
22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.146 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:37.146 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:37.146 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.146 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.147 22:30:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:37.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.147 --rc genhtml_branch_coverage=1 00:07:37.147 --rc genhtml_function_coverage=1 00:07:37.147 --rc genhtml_legend=1 00:07:37.147 --rc geninfo_all_blocks=1 00:07:37.147 --rc geninfo_unexecuted_blocks=1 00:07:37.147 00:07:37.147 ' 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:37.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.147 --rc genhtml_branch_coverage=1 00:07:37.147 --rc genhtml_function_coverage=1 00:07:37.147 --rc genhtml_legend=1 00:07:37.147 --rc geninfo_all_blocks=1 00:07:37.147 --rc geninfo_unexecuted_blocks=1 00:07:37.147 00:07:37.147 ' 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:37.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.147 --rc genhtml_branch_coverage=1 00:07:37.147 --rc genhtml_function_coverage=1 00:07:37.147 --rc genhtml_legend=1 00:07:37.147 --rc geninfo_all_blocks=1 00:07:37.147 --rc geninfo_unexecuted_blocks=1 00:07:37.147 00:07:37.147 ' 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:37.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.147 --rc genhtml_branch_coverage=1 00:07:37.147 --rc genhtml_function_coverage=1 00:07:37.147 --rc genhtml_legend=1 00:07:37.147 --rc geninfo_all_blocks=1 00:07:37.147 --rc geninfo_unexecuted_blocks=1 00:07:37.147 
00:07:37.147 ' 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:37.147 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.147 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:37.148 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:37.148 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:37.148 22:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:39.684 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:39.684 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:39.684 22:30:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:39.684 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:39.684 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:39.684 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:39.684 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:39.684 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:39.684 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:39.684 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:39.684 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:39.684 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:39.684 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:39.684 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:39.684 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:39.684 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:39.684 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:39.684 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:39.684 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:39.684 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:39.684 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:39.684 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:39.684 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:39.684 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:39.684 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:39.684 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:39.685 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:39.685 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:39.685 22:30:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:39.685 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:39.685 22:30:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:39.685 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:39.685 22:30:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:39.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:39.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:07:39.685 00:07:39.685 --- 10.0.0.2 ping statistics --- 00:07:39.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.685 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:39.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:39.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:07:39.685 00:07:39.685 --- 10.0.0.1 ping statistics --- 00:07:39.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.685 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=115875 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 115875 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 115875 ']' 00:07:39.685 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.686 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.686 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:39.686 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.686 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:39.686 [2024-10-11 22:30:42.756415] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:07:39.686 [2024-10-11 22:30:42.756507] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.686 [2024-10-11 22:30:42.820503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:39.686 [2024-10-11 22:30:42.869986] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.686 [2024-10-11 22:30:42.870039] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:39.686 [2024-10-11 22:30:42.870068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:39.686 [2024-10-11 22:30:42.870079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:39.686 [2024-10-11 22:30:42.870089] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:39.686 [2024-10-11 22:30:42.871697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.686 [2024-10-11 22:30:42.871763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.686 [2024-10-11 22:30:42.871759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.944 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.944 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:07:39.944 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:39.944 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:39.944 22:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:39.944 22:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:39.944 22:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:39.945 22:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:40.203 [2024-10-11 22:30:43.265522] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.203 22:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:40.462 22:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:40.720 [2024-10-11 22:30:43.804435] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:40.720 22:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:40.979 22:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:41.238 Malloc0 00:07:41.238 22:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:41.499 Delay0 00:07:41.499 22:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.758 22:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:42.017 NULL1 00:07:42.017 22:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:42.276 22:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=116293 00:07:42.276 22:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:42.276 22:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116293 00:07:42.276 22:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.650 Read completed with error (sct=0, sc=11) 00:07:43.650 22:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.908 22:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:43.908 22:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:44.167 true 00:07:44.167 22:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116293 00:07:44.167 22:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.734 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:07:44.734 22:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.992 22:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:44.992 22:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:45.251 true 00:07:45.251 22:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116293 00:07:45.251 22:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.818 22:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.818 22:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:45.818 22:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:46.076 true 00:07:46.076 22:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116293 00:07:46.076 22:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.343 22:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.605 22:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:46.605 22:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:46.863 true 00:07:47.121 22:30:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116293 00:07:47.121 22:30:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.056 22:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.314 22:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:48.314 22:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:48.572 true 00:07:48.572 22:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116293 00:07:48.572 22:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.830 22:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.089 22:30:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:49.089 22:30:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:49.347 true 00:07:49.347 22:30:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116293 00:07:49.347 22:30:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.283 22:30:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.541 22:30:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:50.541 22:30:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:50.541 true 00:07:50.799 22:30:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116293 00:07:50.799 22:30:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.056 22:30:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.315 22:30:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:51.315 22:30:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:51.573 true 00:07:51.573 22:30:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116293 00:07:51.573 22:30:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.831 22:30:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.088 22:30:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:52.088 22:30:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:52.346 true 00:07:52.346 22:30:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116293 00:07:52.346 22:30:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.312 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.312 22:30:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.312 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.571 22:30:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:53.571 22:30:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:53.829 true 00:07:53.829 22:30:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116293 00:07:53.829 22:30:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.087 22:30:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.345 22:30:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:54.345 22:30:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:54.603 true 00:07:54.603 22:30:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116293 00:07:54.603 22:30:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.537 22:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.794 22:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:55.794 22:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:56.052 true 00:07:56.053 22:30:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116293 00:07:56.053 22:30:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.310 22:30:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.569 22:30:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:56.569 22:30:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:56.827 true 00:07:56.827 22:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116293 00:07:56.827 22:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.762 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:07:57.762 22:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.020 22:31:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:58.020 22:31:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:58.278 true 00:07:58.278 22:31:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116293 00:07:58.278 22:31:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.535 22:31:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.793 22:31:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:58.793 22:31:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:59.051 true 00:07:59.051 22:31:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116293 00:07:59.051 22:31:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.986 22:31:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.986 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.986 22:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:59.986 22:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:00.243 true 00:08:00.243 22:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116293 00:08:00.243 22:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.501 22:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.760 22:31:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:00.760 22:31:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:01.326 true 00:08:01.326 22:31:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116293 00:08:01.326 22:31:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.326 22:31:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.584 22:31:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:01.584 22:31:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:01.842 true 00:08:02.100 22:31:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116293 00:08:02.100 22:31:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.033 22:31:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.291 22:31:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:03.291 22:31:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:03.549 true 00:08:03.549 22:31:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116293 00:08:03.549 22:31:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.807 22:31:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.065 
22:31:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:04.065 22:31:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:04.322 true 00:08:04.323 22:31:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116293 00:08:04.323 22:31:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.580 22:31:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.838 22:31:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:04.838 22:31:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:05.096 true 00:08:05.096 22:31:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116293 00:08:05.096 22:31:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.029 22:31:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.287 22:31:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:06.287 22:31:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:06.545 true 00:08:06.545 22:31:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116293 00:08:06.545 22:31:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.804 22:31:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.062 22:31:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:07.062 22:31:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:07.628 true 00:08:07.628 22:31:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116293 00:08:07.628 22:31:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.194 22:31:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.452 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:08:08.709 22:31:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:08.709 22:31:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:08.968 true 00:08:08.968 22:31:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116293 00:08:08.968 22:31:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.226 22:31:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.484 22:31:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:09.484 22:31:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:09.742 true 00:08:09.742 22:31:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116293 00:08:09.742 22:31:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.672 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.672 22:31:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.672 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:08:10.672 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.672 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.672 22:31:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:10.672 22:31:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:10.930 true 00:08:10.930 22:31:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116293 00:08:10.930 22:31:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.496 22:31:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.496 22:31:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:11.496 22:31:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:11.754 true 00:08:11.754 22:31:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116293 00:08:11.754 22:31:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.688 Initializing NVMe Controllers 00:08:12.688 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:08:12.689 Controller IO queue size 128, less than required. 00:08:12.689 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:12.689 Controller IO queue size 128, less than required. 00:08:12.689 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:12.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:12.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:12.689 Initialization complete. Launching workers. 00:08:12.689 ======================================================== 00:08:12.689 Latency(us) 00:08:12.689 Device Information : IOPS MiB/s Average min max 00:08:12.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 809.56 0.40 77195.28 3528.36 1023217.81 00:08:12.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9996.86 4.88 12803.97 3316.43 535558.22 00:08:12.689 ======================================================== 00:08:12.689 Total : 10806.42 5.28 17627.81 3316.43 1023217.81 00:08:12.689 00:08:12.689 22:31:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.946 22:31:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:12.946 22:31:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:13.204 true 00:08:13.204 22:31:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116293 00:08:13.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: 
(116293) - No such process 00:08:13.204 22:31:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 116293 00:08:13.204 22:31:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.771 22:31:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:13.771 22:31:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:13.771 22:31:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:13.771 22:31:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:13.771 22:31:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:13.771 22:31:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:14.029 null0 00:08:14.029 22:31:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:14.029 22:31:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:14.029 22:31:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:14.287 null1 00:08:14.287 22:31:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:14.287 22:31:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:14.287 22:31:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:14.545 null2 00:08:14.803 22:31:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:14.803 22:31:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:14.803 22:31:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:14.803 null3 00:08:15.061 22:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:15.061 22:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:15.061 22:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:15.318 null4 00:08:15.319 22:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:15.319 22:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:15.319 22:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:15.577 null5 00:08:15.577 22:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:15.577 22:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:15.577 22:31:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:15.835 null6 00:08:15.835 22:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:15.835 22:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:15.835 22:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:16.093 null7 00:08:16.093 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:16.093 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:16.093 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:16.093 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:16.093 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:16.093 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:16.093 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:16.093 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:16.093 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:16.093 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:16.093 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.093 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:16.093 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:16.093 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:16.093 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:16.093 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:16.093 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:16.093 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:16.093 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.093 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:16.093 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:16.093 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:16.093 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:16.093 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 120362 120363 120365 120367 120369 120371 120373 120375
00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.094 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:16.353 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:16.353 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:16.353 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:16.353 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:16.353 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:16.353 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:16.353 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:16.353 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:16.612 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.612 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.612 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:16.612 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.612 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.612 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:16.612 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.612 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.612 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:16.612 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.612 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.612 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:16.612 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.612 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.612 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:16.612 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.612 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.612 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:16.612 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.612 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.612 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.612 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.612 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:16.612 22:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:16.871 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:16.871 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:16.871 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:16.871 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:16.871 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:16.871 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:16.871 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:16.871 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:17.129 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.129 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.129 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:17.129 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.129 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.129 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:17.130 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.130 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.130 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:17.130 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.130 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.130 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:17.130 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.130 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.130 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:17.130 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.130 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.130 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:17.130 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.130 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.388 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:17.388 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.388 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.388 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:17.647 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:17.647 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:17.647 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:17.647 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:17.647 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:17.647 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:17.647 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:17.647 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:17.905 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.905 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.905 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:17.905 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.905 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.905 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:17.905 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.905 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.906 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:17.906 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.906 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.906 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.906 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.906 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:17.906 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.906 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.906 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.906 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.906 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:17.906 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:17.906 22:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:17.906 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.906 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.906 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:18.165 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:18.165 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:18.165 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:18.165 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:18.165 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:18.165 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:18.165 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:18.165 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:18.423 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.423 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.423 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:18.423 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.423 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.423 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:18.423 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.423 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.423 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:18.423 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.423 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.423 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:18.423 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.423 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.423 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:18.424 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.424 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.424 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:18.424 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.424 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.424 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:18.424 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.424 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.424 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:18.682 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:18.682 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:18.682 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:18.682 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:18.682 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:18.682 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:18.682 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:18.682 22:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:18.941 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.941 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.941 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:18.941 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.941 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.941 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:18.941 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.941 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.941 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:18.941 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.941 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.941 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:18.941 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.941 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.941 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:19.200 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.200 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.200 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:19.200 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.200 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.200 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:19.200 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.200 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.200 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:19.459 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:19.459 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:19.459 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:19.459 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:19.459 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:19.459 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:19.459 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:19.459 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:19.718 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.718 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.718 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:19.718 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.718 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.718 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:19.718 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.718 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.718 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:19.718 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.718 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.718 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.718 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:19.718 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.718 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:19.718 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.718 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.718 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:19.718 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.718 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.718 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:19.718 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.718 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.718 22:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:19.977 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:19.977 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:19.977 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:19.977 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:19.977 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:19.977 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:19.977 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:19.978 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:20.236 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.237 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.237 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:20.237 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.237 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.237
22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:20.237 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.237 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.237 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:20.237 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.237 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.237 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:20.237 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.237 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.237 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:20.237 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.237 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.237 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:20.237 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.237 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.237 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:20.237 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.237 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.237 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:20.496 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:20.496 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:20.496 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:20.496 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:20.496 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.496 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:20.496 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:20.496 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:20.755 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.755 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.755 22:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:20.755 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.755 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.755 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:20.755 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.755 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.755 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.755 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:20.755 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.755 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:20.755 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.755 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.755 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:20.755 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.755 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.755 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:20.755 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.755 22:31:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.755 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:21.014 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.014 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.014 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:21.273 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:21.273 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:21.273 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.273 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:21.273 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:21.273 22:31:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:21.273 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:21.273 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:21.532 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.532 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.532 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:21.532 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.532 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.532 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.532 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.532 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:21.532 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:21.532 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.532 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.532 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:21.532 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.532 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.532 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.532 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.532 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:21.532 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:21.532 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.532 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.532 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:21.532 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.532 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.532 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:21.791 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:21.791 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.791 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:21.791 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:21.791 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:21.791 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:21.791 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:21.791 22:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:22.050 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.050 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.050 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.050 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.050 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.050 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.050 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.050 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.050 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.050 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.050 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.050 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.050 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.051 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.051 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.051 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.051 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:22.051 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:22.051 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:22.051 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:22.051 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:22.051 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:22.051 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:22.051 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:22.051 rmmod nvme_tcp 00:08:22.051 rmmod nvme_fabrics 00:08:22.051 rmmod nvme_keyring 00:08:22.051 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:22.051 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:22.051 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:22.051 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 115875 ']' 00:08:22.051 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 115875 00:08:22.051 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' 
-z 115875 ']' 00:08:22.051 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 115875 00:08:22.051 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:08:22.051 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:22.051 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115875 00:08:22.311 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:22.311 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:22.311 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115875' 00:08:22.311 killing process with pid 115875 00:08:22.311 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 115875 00:08:22.311 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 115875 00:08:22.311 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:22.311 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:22.311 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:22.311 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:22.311 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:08:22.311 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:22.311 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- 
# iptables-restore 00:08:22.311 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:22.311 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:22.311 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.311 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.311 22:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.857 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:24.857 00:08:24.857 real 0m47.453s 00:08:24.857 user 3m40.275s 00:08:24.857 sys 0m16.196s 00:08:24.857 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.857 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:24.857 ************************************ 00:08:24.857 END TEST nvmf_ns_hotplug_stress 00:08:24.857 ************************************ 00:08:24.857 22:31:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:24.857 22:31:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:24.857 22:31:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.857 22:31:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:24.857 ************************************ 00:08:24.857 START TEST nvmf_delete_subsystem 00:08:24.857 ************************************ 00:08:24.857 22:31:27 
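The churn traced above (ns_hotplug_stress.sh markers @16, @17, and @18) boils down to a bounded loop that attaches eight null-bdev namespaces to one subsystem, detaches them all, and repeats. The following is a hedged reconstruction from the xtrace, not the script itself: `rpc` is a local stub standing in for spdk/scripts/rpc.py, and the backgrounding of the per-namespace calls is inferred from the shuffled namespace ordering visible in the log.

```shell
# Sketch (reconstructed from the xtrace) of the hotplug stress loop:
# add namespaces 1..8 backed by null0..null7, remove them all, 10 rounds.
# "rpc" is a stub; the real test invokes spdk/scripts/rpc.py against a
# running nvmf target.
rpc() { echo "rpc $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
i=0
while (( i < 10 )); do
    for n in {1..8}; do
        # backgrounded, matching the interleaved ordering in the log
        rpc nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))" &
    done
    wait
    for n in {1..8}; do
        rpc nvmf_subsystem_remove_ns "$NQN" "$n" &
    done
    wait
    (( ++i ))
done
echo "completed $i iterations"
```

Because each round's RPCs run concurrently, their completion order varies between rounds, which is exactly the interleaving the trace shows.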
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:24.857 * Looking for test storage... 00:08:24.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:24.857 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:24.857 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:24.858 22:31:27 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:24.858 22:31:27 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:24.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.858 --rc genhtml_branch_coverage=1 00:08:24.858 --rc genhtml_function_coverage=1 00:08:24.858 --rc genhtml_legend=1 00:08:24.858 --rc geninfo_all_blocks=1 00:08:24.858 --rc geninfo_unexecuted_blocks=1 00:08:24.858 00:08:24.858 ' 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:24.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.858 --rc genhtml_branch_coverage=1 00:08:24.858 --rc genhtml_function_coverage=1 00:08:24.858 --rc genhtml_legend=1 00:08:24.858 --rc geninfo_all_blocks=1 00:08:24.858 --rc geninfo_unexecuted_blocks=1 00:08:24.858 00:08:24.858 ' 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:24.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.858 --rc genhtml_branch_coverage=1 00:08:24.858 --rc genhtml_function_coverage=1 00:08:24.858 --rc genhtml_legend=1 00:08:24.858 --rc geninfo_all_blocks=1 00:08:24.858 --rc geninfo_unexecuted_blocks=1 00:08:24.858 00:08:24.858 ' 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:24.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.858 --rc genhtml_branch_coverage=1 00:08:24.858 --rc genhtml_function_coverage=1 00:08:24.858 --rc genhtml_legend=1 00:08:24.858 --rc geninfo_all_blocks=1 00:08:24.858 --rc geninfo_unexecuted_blocks=1 00:08:24.858 00:08:24.858 ' 
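The cmp_versions trace above (scripts/common.sh, invoked as `lt 1.15 2`) splits each version string on `.`/`-` and compares the components numerically, treating missing components as 0. A minimal standalone sketch of the same idea; `ver_lt` is a hypothetical stand-in written for illustration, not SPDK's actual helper:

```shell
# Component-wise "strictly less than" version comparison, in the style
# of the cmp_versions trace: split on '.' and '-', compare numerically,
# missing components default to 0. Hypothetical helper, not SPDK's.
ver_lt() {
    local IFS=.- a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    local v
    for (( v = 0; v < n; v++ )); do
        (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
    done
    return 1   # equal versions are not strictly less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

This matches the trace's outcome: `lt 1.15 2` succeeds because the first components already decide the comparison (1 < 2), so lcov 1.15 is treated as older than 2.x.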
00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:24.858 22:31:27 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.858 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:24.859 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.859 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:24.859 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:24.859 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:24.859 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:24.859 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.859 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.859 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:24.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:24.859 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:24.859 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:24.859 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:24.859 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:08:24.859 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:24.859 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:24.859 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:24.859 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:24.859 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:24.859 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.859 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:24.859 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.859 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:24.859 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:24.859 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:24.859 22:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:26.772 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:26.772 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:26.772 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:26.772 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:26.772 22:31:29 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:26.772 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:26.772 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:26.772 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:26.772 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:26.772 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:26.772 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:26.772 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:26.772 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:26.772 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:26.772 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:26.772 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:26.772 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:26.772 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:26.772 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:26.772 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:26.772 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:26.772 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:26.772 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:26.772 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:26.772 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:26.772 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:26.772 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:26.772 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:26.772 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:26.773 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:26.773 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:26.773 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 
0000:0a:00.1: cvl_0_1' 00:08:26.773 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:26.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:26.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:08:26.773 00:08:26.773 --- 10.0.0.2 ping statistics --- 00:08:26.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.773 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:26.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:26.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:08:26.773 00:08:26.773 --- 10.0.0.1 ping statistics --- 00:08:26.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.773 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:26.773 22:31:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:26.773 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:26.773 22:31:30 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:26.773 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:26.773 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:26.773 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=123268 00:08:26.774 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:26.774 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 123268 00:08:26.774 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 123268 ']' 00:08:26.774 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.774 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:26.774 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.774 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:26.774 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.034 [2024-10-11 22:31:30.058333] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:08:27.034 [2024-10-11 22:31:30.058408] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.034 [2024-10-11 22:31:30.124722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:27.034 [2024-10-11 22:31:30.174478] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.034 [2024-10-11 22:31:30.174532] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.034 [2024-10-11 22:31:30.174570] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.034 [2024-10-11 22:31:30.174582] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.034 [2024-10-11 22:31:30.174592] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:27.034 [2024-10-11 22:31:30.176094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.034 [2024-10-11 22:31:30.176099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.034 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:27.034 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:08:27.034 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:27.034 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:27.034 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.293 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.293 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:27.293 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.293 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.293 [2024-10-11 22:31:30.329446] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.293 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.293 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:27.293 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.293 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:08:27.293 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.293 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:27.293 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.293 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.293 [2024-10-11 22:31:30.345700] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.293 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.293 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:27.293 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.293 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.293 NULL1 00:08:27.293 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.293 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:27.293 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.293 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.293 Delay0 00:08:27.293 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.293 22:31:30 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.294 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.294 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.294 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.294 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=123290 00:08:27.294 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:27.294 22:31:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:27.294 [2024-10-11 22:31:30.420567] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:08:29.195 22:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:29.195 22:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.195 22:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:29.454 Write completed with error (sct=0, sc=8)
00:08:29.454 Read completed with error (sct=0, sc=8)
00:08:29.454 starting I/O failed: -6
(repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" records omitted)
00:08:29.455 [2024-10-11 22:31:32.541945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1928290 is same with the state(6) to be set
(repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" records omitted)
(repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" records omitted)
00:08:30.392 [2024-10-11 22:31:33.516004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1925d00 is same with the state(6) to be set
(repeated "Read/Write completed with error (sct=0, sc=8)" records omitted)
00:08:30.392 [2024-10-11 22:31:33.542837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f28e400cfe0 is same with the state(6) to be set
(repeated "Read/Write completed with error (sct=0, sc=8)" records omitted)
00:08:30.392 [2024-10-11 22:31:33.543401] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f28e400d640 is same with the state(6) to be set
(repeated "Read/Write completed with error (sct=0, sc=8)" records omitted)
00:08:30.392 [2024-10-11 22:31:33.546248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280b0 is same with the state(6) to be set
(repeated "Read/Write completed with error (sct=0, sc=8)" records omitted)
00:08:30.392 [2024-10-11 22:31:33.546458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19285c0 is same with the state(6) to be set
00:08:30.392 Initializing NVMe Controllers
00:08:30.392 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:30.392 Controller IO queue size 128, less than required.
00:08:30.392 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:30.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:30.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:30.392 Initialization complete. Launching workers.
00:08:30.392 ========================================================
00:08:30.393 Latency(us)
00:08:30.393 Device Information : IOPS MiB/s Average min max
00:08:30.393 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.30 0.08 897108.32 480.54 1012260.45
00:08:30.393 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 178.73 0.09 916020.10 517.35 1012407.33
00:08:30.393 ========================================================
00:08:30.393 Total : 348.03 0.17 906820.51 480.54 1012407.33
00:08:30.393 [2024-10-11 22:31:33.547339] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1925d00 (9): Bad file descriptor
00:08:30.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:08:30.393 22:31:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:30.393 22:31:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:08:30.393 22:31:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 123290
00:08:30.393 22:31:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:30.960 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:30.960 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 123290 00:08:30.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (123290) - No such process 00:08:30.961 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 123290 00:08:30.961 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:08:30.961 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 123290 00:08:30.961 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:30.961 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.961 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:30.961 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.961 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 123290 00:08:30.961 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:30.961 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:30.961 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:30.961 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 
0 )) 00:08:30.961 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:30.961 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.961 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:30.961 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.961 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.961 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.961 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:30.961 [2024-10-11 22:31:34.072105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.961 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.961 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.961 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.961 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:30.961 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.961 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=123698 00:08:30.961 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@56 -- # delay=0 00:08:30.961 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123698 00:08:30.961 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:30.961 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:30.961 [2024-10-11 22:31:34.131923] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:31.528 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:31.528 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123698 00:08:31.528 22:31:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:32.094 22:31:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:32.094 22:31:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123698 00:08:32.094 22:31:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:32.353 22:31:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:32.353 22:31:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123698 00:08:32.353 22:31:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:32.920 
22:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:32.920 22:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123698 00:08:32.920 22:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:33.487 22:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:33.487 22:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123698 00:08:33.487 22:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:34.054 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:34.054 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123698 00:08:34.054 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:34.054 Initializing NVMe Controllers 00:08:34.054 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:34.054 Controller IO queue size 128, less than required. 00:08:34.054 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:34.054 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:34.054 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:34.054 Initialization complete. Launching workers. 
00:08:34.054 ======================================================== 00:08:34.054 Latency(us) 00:08:34.054 Device Information : IOPS MiB/s Average min max 00:08:34.054 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003500.01 1000163.83 1041549.16 00:08:34.054 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004496.38 1000166.39 1012195.40 00:08:34.054 ======================================================== 00:08:34.054 Total : 256.00 0.12 1003998.19 1000163.83 1041549.16 00:08:34.054 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123698 00:08:34.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (123698) - No such process 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 123698 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:08:34.631 rmmod nvme_tcp 00:08:34.631 rmmod nvme_fabrics 00:08:34.631 rmmod nvme_keyring 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 123268 ']' 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 123268 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 123268 ']' 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 123268 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 123268 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 123268' 00:08:34.631 killing process with pid 123268 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 123268 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 123268 
00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.631 22:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.181 22:31:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:37.181 00:08:37.181 real 0m12.280s 00:08:37.181 user 0m27.773s 00:08:37.181 sys 0m2.947s 00:08:37.181 22:31:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.181 22:31:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:37.181 ************************************ 00:08:37.181 END TEST 
nvmf_delete_subsystem 00:08:37.181 ************************************ 00:08:37.181 22:31:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:37.181 22:31:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:37.181 22:31:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.181 22:31:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:37.181 ************************************ 00:08:37.181 START TEST nvmf_host_management 00:08:37.181 ************************************ 00:08:37.181 22:31:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:37.181 * Looking for test storage... 00:08:37.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:37.181 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:37.181 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:08:37.181 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:37.181 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:37.181 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.181 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.181 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.181 22:31:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.181 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.181 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.181 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.181 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.181 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.181 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.181 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.181 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:37.181 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:37.181 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.181 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.181 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:37.181 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:37.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.182 --rc genhtml_branch_coverage=1 00:08:37.182 --rc genhtml_function_coverage=1 00:08:37.182 --rc genhtml_legend=1 00:08:37.182 --rc 
geninfo_all_blocks=1 00:08:37.182 --rc geninfo_unexecuted_blocks=1 00:08:37.182 00:08:37.182 ' 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:37.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.182 --rc genhtml_branch_coverage=1 00:08:37.182 --rc genhtml_function_coverage=1 00:08:37.182 --rc genhtml_legend=1 00:08:37.182 --rc geninfo_all_blocks=1 00:08:37.182 --rc geninfo_unexecuted_blocks=1 00:08:37.182 00:08:37.182 ' 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:37.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.182 --rc genhtml_branch_coverage=1 00:08:37.182 --rc genhtml_function_coverage=1 00:08:37.182 --rc genhtml_legend=1 00:08:37.182 --rc geninfo_all_blocks=1 00:08:37.182 --rc geninfo_unexecuted_blocks=1 00:08:37.182 00:08:37.182 ' 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:37.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.182 --rc genhtml_branch_coverage=1 00:08:37.182 --rc genhtml_function_coverage=1 00:08:37.182 --rc genhtml_legend=1 00:08:37.182 --rc geninfo_all_blocks=1 00:08:37.182 --rc geninfo_unexecuted_blocks=1 00:08:37.182 00:08:37.182 ' 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:37.182 
22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
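The paths/export.sh trace above shows the same `/opt/golangci`, `/opt/protoc`, and `/opt/go` directories prepended repeatedly, so PATH accumulates many duplicate entries. One way to collapse such a PATH while preserving first-seen order (the helper name is illustrative, not from the repo):

```shell
# Collapse duplicate PATH entries, keeping the first occurrence of each
# directory. Splits on ':' and tracks already-emitted entries.
dedup_path() {
    local entry out= seen=:
    local IFS=:
    for entry in $1; do
        case $seen in
            *":$entry:"*) continue ;;   # already emitted, skip the repeat
        esac
        seen+="$entry:"
        out+=${out:+:}$entry
    done
    printf '%s\n' "$out"
}
```

Applied to the PATH in the trace, this would reduce the seven repeated `/opt/...` prefixes to one each.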
00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:37.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:37.182 22:31:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.096 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:39.096 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:39.096 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:39.096 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:39.096 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:39.096 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:39.096 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:39.096 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:39.096 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:39.096 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:39.096 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:39.096 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:39.096 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
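The trace above captures a genuine script failure: `'[' '' -eq 1 ']'` at nvmf/common.sh line 33 raises `[: : integer expression expected`, because an unset variable expands to an empty string before the numeric test runs. A defensive pattern for such flag checks (the guard is a suggestion, not the upstream fix):

```shell
# "[: : integer expression expected" comes from numerically testing an
# empty string, e.g. [ "$flag" -eq 1 ] with flag unset. Supplying a
# default turns the empty value into a valid integer first.
is_enabled() {
    local flag=${1:-0}       # empty or unset collapses to 0
    [ "$flag" -eq 1 ]
}
```

With the default in place the test degrades to false instead of emitting an error, which is the behavior the surrounding `build_nvmf_app_args` logic appears to rely on.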
00:08:39.096 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:39.096 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:39.096 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:39.096 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:39.097 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:39.097 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:39.097 22:31:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:39.097 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:39.097 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:08:39.097 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:39.357 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:39.357 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:39.357 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:39.357 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:39.357 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:39.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:39.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:08:39.357 00:08:39.357 --- 10.0.0.2 ping statistics --- 00:08:39.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.357 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:08:39.357 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:39.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:39.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:08:39.357 00:08:39.357 --- 10.0.0.1 ping statistics --- 00:08:39.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.357 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:08:39.357 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.357 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:08:39.357 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:39.357 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.357 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:39.357 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:39.357 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.357 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:39.357 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:39.357 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:39.357 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:39.357 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:39.357 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:39.357 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:39.357 22:31:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.357 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=126169 00:08:39.357 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:39.357 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 126169 00:08:39.357 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 126169 ']' 00:08:39.357 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.357 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:39.357 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.357 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:39.357 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.357 [2024-10-11 22:31:42.503509] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
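The `nvmf_tcp_init` sequence traced earlier (flush both interfaces, create the `cvl_0_0_ns_spdk` namespace, move the target interface into it, assign 10.0.0.1/24 and 10.0.0.2/24, bring links up, open port 4420 with a tagged iptables rule, then ping both directions) can be condensed as below. `run` only echoes here so the sketch is testable without root; replacing it with direct execution (as root, with real interfaces) performs the actual setup:

```shell
# Condensed view of the nvmf_tcp_init steps from the trace above.
run() { printf '%s\n' "$*"; }

nvmf_tcp_init_sketch() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    run ip -4 addr flush "$target_if"
    run ip -4 addr flush "$initiator_if"
    run ip netns add "$ns"
    run ip link set "$target_if" netns "$ns"             # target side lives in the netns
    run ip addr add 10.0.0.1/24 dev "$initiator_if"
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up
    # open the NVMe-oF port, comment-tagged so teardown can strip it later
    run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 \
        -j ACCEPT -m comment --comment SPDK_NVMF
}
```

The comment tag is what the teardown's `iptables-save | grep -v SPDK_NVMF | iptables-restore` pipeline keys on to remove only the rules this test added.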
00:08:39.357 [2024-10-11 22:31:42.503623] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.358 [2024-10-11 22:31:42.569016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:39.358 [2024-10-11 22:31:42.614301] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.358 [2024-10-11 22:31:42.614370] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.358 [2024-10-11 22:31:42.614398] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.358 [2024-10-11 22:31:42.614408] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.358 [2024-10-11 22:31:42.614418] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:39.358 [2024-10-11 22:31:42.616054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.358 [2024-10-11 22:31:42.616121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.358 [2024-10-11 22:31:42.616184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:39.358 [2024-10-11 22:31:42.616187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.617 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:39.617 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:39.617 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:39.617 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:39.617 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.617 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.617 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:39.617 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.617 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.617 [2024-10-11 22:31:42.785711] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.617 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.617 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:39.617 22:31:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:39.617 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.617 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:39.617 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:39.617 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:39.617 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.617 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.617 Malloc0 00:08:39.617 [2024-10-11 22:31:42.862487] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:39.617 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.617 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:39.617 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:39.617 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.876 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=126218 00:08:39.876 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 126218 /var/tmp/bdevperf.sock 00:08:39.876 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 126218 ']' 00:08:39.876 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:39.876 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:39.877 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:39.877 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:39.877 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:39.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:39.877 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:08:39.877 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:39.877 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:08:39.877 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.877 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:39.877 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:39.877 { 00:08:39.877 "params": { 00:08:39.877 "name": "Nvme$subsystem", 00:08:39.877 "trtype": "$TEST_TRANSPORT", 00:08:39.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:39.877 "adrfam": "ipv4", 00:08:39.877 "trsvcid": "$NVMF_PORT", 00:08:39.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:39.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:39.877 "hdgst": ${hdgst:-false}, 
00:08:39.877 "ddgst": ${ddgst:-false} 00:08:39.877 }, 00:08:39.877 "method": "bdev_nvme_attach_controller" 00:08:39.877 } 00:08:39.877 EOF 00:08:39.877 )") 00:08:39.877 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:08:39.877 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:08:39.877 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:08:39.877 22:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:39.877 "params": { 00:08:39.877 "name": "Nvme0", 00:08:39.877 "trtype": "tcp", 00:08:39.877 "traddr": "10.0.0.2", 00:08:39.877 "adrfam": "ipv4", 00:08:39.877 "trsvcid": "4420", 00:08:39.877 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:39.877 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:39.877 "hdgst": false, 00:08:39.877 "ddgst": false 00:08:39.877 }, 00:08:39.877 "method": "bdev_nvme_attach_controller" 00:08:39.877 }' 00:08:39.877 [2024-10-11 22:31:42.947372] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:08:39.877 [2024-10-11 22:31:42.947446] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126218 ] 00:08:39.877 [2024-10-11 22:31:43.010508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.877 [2024-10-11 22:31:43.057567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.136 Running I/O for 10 seconds... 
00:08:40.136 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:40.136 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:40.136 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:40.136 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.136 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.136 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.136 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:40.136 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:40.136 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:40.136 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:40.136 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:40.136 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:40.136 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:40.136 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:40.136 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:40.136 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:40.136 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.136 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.136 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.136 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:40.136 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:40.136 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:40.398 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:40.398 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:40.398 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:40.398 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:40.398 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.398 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.398 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.398 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:08:40.398 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:08:40.398 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:40.398 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:40.398 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:40.398 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:40.398 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.398 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.398 [2024-10-11 22:31:43.633144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x862670 is same with the state(6) to be set 00:08:40.398 [... identical tcp.c:1773 "recv state of tqpair=0x862670 is same with the state(6) to be set" notices repeated, elided ...] 00:08:40.399 [2024-10-11 22:31:43.633759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x862670 is same with the state(6) to be set 00:08:40.399 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.399 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:40.399 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.399 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.399 [2024-10-11 22:31:43.638205] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.399 [2024-10-11 22:31:43.638256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.399 [2024-10-11 22:31:43.638297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.399 [2024-10-11 22:31:43.638325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.399 [2024-10-11 22:31:43.638357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.399 [2024-10-11 22:31:43.638395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.399 [2024-10-11 22:31:43.638427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.399 [2024-10-11 22:31:43.638454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.399 [2024-10-11 22:31:43.638483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.399 [2024-10-11 22:31:43.638510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.399 [2024-10-11 22:31:43.638540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.399 [2024-10-11 22:31:43.638579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.399 [2024-10-11 22:31:43.638619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.399 [2024-10-11 22:31:43.638648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.399 [2024-10-11 22:31:43.638677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.399 [2024-10-11 22:31:43.638704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.399 [2024-10-11 22:31:43.638735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.399 [2024-10-11 22:31:43.638764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.399 [2024-10-11 22:31:43.638792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.399 [2024-10-11 22:31:43.638819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.399 [2024-10-11 22:31:43.638858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.399 [2024-10-11 22:31:43.638886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.399 [2024-10-11 22:31:43.638923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:40.399 [2024-10-11 22:31:43.638951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.399 [2024-10-11 22:31:43.638979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.399 [2024-10-11 22:31:43.639005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.399 [2024-10-11 22:31:43.639034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.399 [2024-10-11 22:31:43.639061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.399 [2024-10-11 22:31:43.639090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.399 [2024-10-11 22:31:43.639118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.399 [2024-10-11 22:31:43.639154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.399 [2024-10-11 22:31:43.639181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.399 [2024-10-11 22:31:43.639207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.399 [2024-10-11 22:31:43.639229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.399 [2024-10-11 22:31:43.639251] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.399 [2024-10-11 22:31:43.639272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.399 [2024-10-11 22:31:43.639294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.399 [2024-10-11 22:31:43.639314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.399 [2024-10-11 22:31:43.639338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.399 [2024-10-11 22:31:43.639361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.399 [2024-10-11 22:31:43.639388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.399 [2024-10-11 22:31:43.639414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.399 [2024-10-11 22:31:43.639442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.399 [2024-10-11 22:31:43.639469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.399 [2024-10-11 22:31:43.639497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.399 [2024-10-11 22:31:43.639523] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.399 [2024-10-11 22:31:43.639561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.399 [2024-10-11 22:31:43.639590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[identical WRITE / ABORTED - SQ DELETION message pairs repeat for cid:24 through cid:63, lba advancing by 128 from 84992 to 89984, timestamps 22:31:43.639628 through 22:31:43.640953]
00:08:40.400 [2024-10-11 22:31:43.640988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:08:40.400 [2024-10-11 22:31:43.641059] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1856e80 was disconnected and freed. reset controller. 00:08:40.400 [2024-10-11 22:31:43.641133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:40.400 [2024-10-11 22:31:43.641156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.400 [2024-10-11 22:31:43.641173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:40.400 [2024-10-11 22:31:43.641186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.400 [2024-10-11 22:31:43.641201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:40.400 [2024-10-11 22:31:43.641215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.400 [2024-10-11 22:31:43.641229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:40.400 [2024-10-11 22:31:43.641243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.400 [2024-10-11 22:31:43.641257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163de00 is same with the state(6) to be set 00:08:40.400 [2024-10-11 22:31:43.642379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:40.400 task offset: 81920 on job bdev=Nvme0n1 fails 00:08:40.400 00:08:40.400 Latency(us) 00:08:40.400 [2024-10-11T20:31:43.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.400 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:40.400 Job: Nvme0n1 ended in about 0.40 seconds with error 00:08:40.400 Verification LBA range: start 0x0 length 0x400 00:08:40.401 Nvme0n1 : 0.40 1588.65 99.29 158.87 0.00 35573.47 3689.43 34175.81 00:08:40.401 [2024-10-11T20:31:43.669Z] =================================================================================================================== 00:08:40.401 [2024-10-11T20:31:43.669Z] Total : 1588.65 99.29 158.87 0.00 35573.47 3689.43 34175.81 00:08:40.401 [2024-10-11 22:31:43.644241] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:40.401 [2024-10-11 22:31:43.644282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163de00 (9): Bad file descriptor 00:08:40.401 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.401 22:31:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:40.660 [2024-10-11 22:31:43.697823] 
bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:41.594 22:31:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 126218 00:08:41.594 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (126218) - No such process 00:08:41.594 22:31:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:41.594 22:31:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:41.594 22:31:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:41.594 22:31:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:41.594 22:31:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:08:41.594 22:31:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:08:41.594 22:31:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:41.594 22:31:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:41.594 { 00:08:41.594 "params": { 00:08:41.594 "name": "Nvme$subsystem", 00:08:41.594 "trtype": "$TEST_TRANSPORT", 00:08:41.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.594 "adrfam": "ipv4", 00:08:41.594 "trsvcid": "$NVMF_PORT", 00:08:41.594 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.594 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.594 "hdgst": ${hdgst:-false}, 00:08:41.594 "ddgst": ${ddgst:-false} 00:08:41.594 }, 00:08:41.595 "method": 
"bdev_nvme_attach_controller" 00:08:41.595 } 00:08:41.595 EOF 00:08:41.595 )") 00:08:41.595 22:31:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:08:41.595 22:31:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:08:41.595 22:31:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:08:41.595 22:31:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:41.595 "params": { 00:08:41.595 "name": "Nvme0", 00:08:41.595 "trtype": "tcp", 00:08:41.595 "traddr": "10.0.0.2", 00:08:41.595 "adrfam": "ipv4", 00:08:41.595 "trsvcid": "4420", 00:08:41.595 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:41.595 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:41.595 "hdgst": false, 00:08:41.595 "ddgst": false 00:08:41.595 }, 00:08:41.595 "method": "bdev_nvme_attach_controller" 00:08:41.595 }' 00:08:41.595 [2024-10-11 22:31:44.700199] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:08:41.595 [2024-10-11 22:31:44.700277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126456 ] 00:08:41.595 [2024-10-11 22:31:44.763932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.595 [2024-10-11 22:31:44.810118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.163 Running I/O for 1 seconds... 
00:08:43.098 1625.00 IOPS, 101.56 MiB/s 00:08:43.098 Latency(us) 00:08:43.098 [2024-10-11T20:31:46.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.098 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:43.098 Verification LBA range: start 0x0 length 0x400 00:08:43.098 Nvme0n1 : 1.04 1662.60 103.91 0.00 0.00 37880.37 7524.50 33399.09 00:08:43.098 [2024-10-11T20:31:46.366Z] =================================================================================================================== 00:08:43.098 [2024-10-11T20:31:46.366Z] Total : 1662.60 103.91 0.00 0.00 37880.37 7524.50 33399.09 00:08:43.357 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:43.357 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:43.357 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:43.357 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:43.357 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:43.357 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:43.357 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:43.357 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:43.357 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:43.357 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:43.357 22:31:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:43.357 rmmod nvme_tcp 00:08:43.357 rmmod nvme_fabrics 00:08:43.357 rmmod nvme_keyring 00:08:43.357 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:43.357 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:43.357 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:43.357 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 126169 ']' 00:08:43.357 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 126169 00:08:43.357 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 126169 ']' 00:08:43.357 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 126169 00:08:43.357 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:43.357 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:43.357 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 126169 00:08:43.357 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:43.357 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:43.357 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 126169' 00:08:43.357 killing process with pid 126169 00:08:43.357 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 126169 00:08:43.357 22:31:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 126169 00:08:43.618 [2024-10-11 22:31:46.689043] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:43.618 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:43.618 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:43.618 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:43.618 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:43.618 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:08:43.618 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:43.618 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:08:43.618 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:43.618 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:43.618 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.618 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.618 22:31:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.529 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:45.529 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:45.529 00:08:45.529 real 0m8.808s 00:08:45.529 user 0m19.553s 
00:08:45.529 sys 0m2.778s 00:08:45.529 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.529 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:45.529 ************************************ 00:08:45.529 END TEST nvmf_host_management 00:08:45.529 ************************************ 00:08:45.529 22:31:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:45.529 22:31:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:45.529 22:31:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.529 22:31:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:45.789 ************************************ 00:08:45.789 START TEST nvmf_lvol 00:08:45.789 ************************************ 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:45.789 * Looking for test storage... 
00:08:45.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.789 22:31:48 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:45.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.789 --rc genhtml_branch_coverage=1 00:08:45.789 --rc genhtml_function_coverage=1 00:08:45.789 --rc genhtml_legend=1 00:08:45.789 --rc geninfo_all_blocks=1 00:08:45.789 --rc geninfo_unexecuted_blocks=1 
00:08:45.789 00:08:45.789 ' 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:45.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.789 --rc genhtml_branch_coverage=1 00:08:45.789 --rc genhtml_function_coverage=1 00:08:45.789 --rc genhtml_legend=1 00:08:45.789 --rc geninfo_all_blocks=1 00:08:45.789 --rc geninfo_unexecuted_blocks=1 00:08:45.789 00:08:45.789 ' 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:45.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.789 --rc genhtml_branch_coverage=1 00:08:45.789 --rc genhtml_function_coverage=1 00:08:45.789 --rc genhtml_legend=1 00:08:45.789 --rc geninfo_all_blocks=1 00:08:45.789 --rc geninfo_unexecuted_blocks=1 00:08:45.789 00:08:45.789 ' 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:45.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.789 --rc genhtml_branch_coverage=1 00:08:45.789 --rc genhtml_function_coverage=1 00:08:45.789 --rc genhtml_legend=1 00:08:45.789 --rc geninfo_all_blocks=1 00:08:45.789 --rc geninfo_unexecuted_blocks=1 00:08:45.789 00:08:45.789 ' 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:45.789 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.790 22:31:48 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:45.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.790 22:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.790 22:31:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:45.790 22:31:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:45.790 22:31:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:45.790 22:31:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.327 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:48.327 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:48.328 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:48.328 
22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:48.328 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:48.328 22:31:51 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:48.328 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:48.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:48.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:08:48.328 00:08:48.328 --- 10.0.0.2 ping statistics --- 00:08:48.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.328 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:48.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:48.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:08:48.328 00:08:48.328 --- 10.0.0.1 ping statistics --- 00:08:48.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.328 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=128607 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 128607 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 128607 ']' 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:48.328 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:48.328 [2024-10-11 22:31:51.469423] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:08:48.328 [2024-10-11 22:31:51.469492] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.328 [2024-10-11 22:31:51.535010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:48.328 [2024-10-11 22:31:51.579177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.328 [2024-10-11 22:31:51.579232] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.328 [2024-10-11 22:31:51.579256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.328 [2024-10-11 22:31:51.579267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.328 [2024-10-11 22:31:51.579275] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:48.328 [2024-10-11 22:31:51.580708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.328 [2024-10-11 22:31:51.580768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:48.328 [2024-10-11 22:31:51.580771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.587 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:48.587 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:48.587 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:48.587 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:48.587 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:48.587 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:48.587 22:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:48.845 [2024-10-11 22:31:51.978488] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:48.845 22:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:49.103 22:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:49.103 22:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:49.363 22:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:49.363 22:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:49.621 22:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:50.187 22:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ca8e901f-7018-4b1a-afca-760779cab059 00:08:50.187 22:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ca8e901f-7018-4b1a-afca-760779cab059 lvol 20 00:08:50.187 22:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=915cc31d-c2d9-4bd7-a0b3-4fb1d55c36d7 00:08:50.187 22:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:50.446 22:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 915cc31d-c2d9-4bd7-a0b3-4fb1d55c36d7 00:08:51.012 22:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:51.012 [2024-10-11 22:31:54.237033] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:51.012 22:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:51.270 22:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=129022 00:08:51.270 22:31:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:51.270 22:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:52.646 22:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 915cc31d-c2d9-4bd7-a0b3-4fb1d55c36d7 MY_SNAPSHOT 00:08:52.646 22:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=05c0959a-9d25-4bd9-bf76-b317fffa169c 00:08:52.646 22:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 915cc31d-c2d9-4bd7-a0b3-4fb1d55c36d7 30 00:08:52.904 22:31:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 05c0959a-9d25-4bd9-bf76-b317fffa169c MY_CLONE 00:08:53.471 22:31:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c1bbba60-1c03-4c52-b5fe-435694985a41 00:08:53.471 22:31:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c1bbba60-1c03-4c52-b5fe-435694985a41 00:08:54.037 22:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 129022 00:09:02.148 Initializing NVMe Controllers 00:09:02.148 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:02.148 Controller IO queue size 128, less than required. 00:09:02.148 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:09:02.148 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:02.148 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:02.148 Initialization complete. Launching workers. 00:09:02.148 ======================================================== 00:09:02.148 Latency(us) 00:09:02.148 Device Information : IOPS MiB/s Average min max 00:09:02.148 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10566.49 41.28 12119.19 425.53 58003.26 00:09:02.148 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10492.99 40.99 12202.29 1912.45 50130.28 00:09:02.148 ======================================================== 00:09:02.148 Total : 21059.47 82.26 12160.59 425.53 58003.26 00:09:02.148 00:09:02.148 22:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:02.148 22:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 915cc31d-c2d9-4bd7-a0b3-4fb1d55c36d7 00:09:02.407 22:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ca8e901f-7018-4b1a-afca-760779cab059 00:09:02.665 22:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:02.665 22:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:02.665 22:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:02.665 22:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:02.665 22:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:02.665 22:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:02.665 22:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:02.665 22:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:02.665 22:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:02.665 rmmod nvme_tcp 00:09:02.665 rmmod nvme_fabrics 00:09:02.665 rmmod nvme_keyring 00:09:02.665 22:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:02.665 22:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:02.665 22:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:02.665 22:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 128607 ']' 00:09:02.665 22:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 128607 00:09:02.665 22:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 128607 ']' 00:09:02.665 22:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 128607 00:09:02.665 22:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:09:02.665 22:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:02.665 22:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 128607 00:09:02.665 22:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:02.666 22:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:02.666 22:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 128607' 00:09:02.666 killing process with pid 128607 00:09:02.666 22:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- common/autotest_common.sh@969 -- # kill 128607 00:09:02.666 22:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 128607 00:09:02.924 22:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:02.924 22:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:02.924 22:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:02.924 22:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:02.924 22:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:09:02.924 22:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:02.924 22:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:09:02.924 22:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:02.924 22:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:02.924 22:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.924 22:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:02.924 22:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.468 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:05.468 00:09:05.468 real 0m19.344s 00:09:05.468 user 1m5.334s 00:09:05.469 sys 0m5.818s 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:05.469 ************************************ 00:09:05.469 END TEST nvmf_lvol 00:09:05.469 
************************************ 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:05.469 ************************************ 00:09:05.469 START TEST nvmf_lvs_grow 00:09:05.469 ************************************ 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:05.469 * Looking for test storage... 00:09:05.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@336 -- # read -ra ver1 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:05.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.469 --rc genhtml_branch_coverage=1 00:09:05.469 --rc genhtml_function_coverage=1 00:09:05.469 --rc genhtml_legend=1 00:09:05.469 --rc geninfo_all_blocks=1 00:09:05.469 --rc geninfo_unexecuted_blocks=1 00:09:05.469 00:09:05.469 ' 
00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:05.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.469 --rc genhtml_branch_coverage=1 00:09:05.469 --rc genhtml_function_coverage=1 00:09:05.469 --rc genhtml_legend=1 00:09:05.469 --rc geninfo_all_blocks=1 00:09:05.469 --rc geninfo_unexecuted_blocks=1 00:09:05.469 00:09:05.469 ' 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:05.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.469 --rc genhtml_branch_coverage=1 00:09:05.469 --rc genhtml_function_coverage=1 00:09:05.469 --rc genhtml_legend=1 00:09:05.469 --rc geninfo_all_blocks=1 00:09:05.469 --rc geninfo_unexecuted_blocks=1 00:09:05.469 00:09:05.469 ' 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:05.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.469 --rc genhtml_branch_coverage=1 00:09:05.469 --rc genhtml_function_coverage=1 00:09:05.469 --rc genhtml_legend=1 00:09:05.469 --rc geninfo_all_blocks=1 00:09:05.469 --rc geninfo_unexecuted_blocks=1 00:09:05.469 00:09:05.469 ' 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.469 22:32:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.469 
22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.469 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.470 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.470 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:05.470 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.470 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:05.470 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:05.470 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:05.470 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.470 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.470 22:32:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.470 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:05.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:05.470 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:05.470 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:05.470 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:05.470 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:05.470 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:05.470 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:05.470 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:05.470 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:05.470 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:05.470 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:05.470 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:05.470 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.470 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.470 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.470 
22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:05.470 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:05.470 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:05.470 22:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:07.382 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:07.382 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:07.382 
22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:07.382 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@416 -- # [[ up == up ]] 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:07.382 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:07.382 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:07.383 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:07.383 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:07.383 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:07.383 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:07.383 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:07.383 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:07.383 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:07.383 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:07.642 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:07.642 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:07.642 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:07.642 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:07.642 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:07.642 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:07.642 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:07.642 22:32:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:07.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:07.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:09:07.642 00:09:07.642 --- 10.0.0.2 ping statistics --- 00:09:07.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.642 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:09:07.642 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:07.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:07.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:09:07.642 00:09:07.642 --- 10.0.0.1 ping statistics --- 00:09:07.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.642 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:09:07.642 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:07.642 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:09:07.643 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:07.643 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:07.643 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:07.643 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:07.643 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:07.643 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:07.643 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:07.643 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
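The nvmf_tcp_init portion of the trace above rebuilds the standard two-interface TCP topology: the target NIC is moved into its own network namespace, both ends get 10.0.0.x/24 addresses, an iptables ACCEPT rule opens the NVMe/TCP port 4420, and connectivity is verified with a ping in each direction. A hedged sketch of those steps, using the interface and namespace names that appear in this log (requires root; defined as a function so nothing runs on its own):

```shell
# Hedged sketch of the netns topology setup traced above (requires root).
# Interface/namespace names are the ones from this log, not generic defaults.
setup_tcp_topology() {
  local ns=cvl_0_0_ns_spdk tgt_if=cvl_0_0 init_if=cvl_0_1
  ip netns add "$ns"
  ip link set "$tgt_if" netns "$ns"           # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev "$init_if"      # initiator stays in the root namespace
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
  ip link set "$init_if" up
  ip netns exec "$ns" ip link set "$tgt_if" up
  ip netns exec "$ns" ip link set lo up
  # Open the listener port, tagging the rule so teardown can strip it later
  iptables -I INPUT 1 -i "$init_if" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # Verify both directions before nvmf_tgt is started inside the namespace
  ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

Putting the target in a namespace is what lets a single host exercise a real TCP path: the subsequent `nvmf_tgt` launch in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk` so the target listens on 10.0.0.2 while the initiator connects from the root namespace.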
nvmfappstart -m 0x1 00:09:07.643 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:07.643 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:07.643 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:07.643 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=132418 00:09:07.643 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:07.643 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 132418 00:09:07.643 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 132418 ']' 00:09:07.643 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.643 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:07.643 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.643 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:07.643 22:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:07.643 [2024-10-11 22:32:10.795961] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:09:07.643 [2024-10-11 22:32:10.796028] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.643 [2024-10-11 22:32:10.856894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.643 [2024-10-11 22:32:10.899029] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:07.643 [2024-10-11 22:32:10.899084] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:07.643 [2024-10-11 22:32:10.899105] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:07.643 [2024-10-11 22:32:10.899116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:07.643 [2024-10-11 22:32:10.899125] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:07.643 [2024-10-11 22:32:10.899732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.901 22:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:07.901 22:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:09:07.901 22:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:07.902 22:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:07.902 22:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:07.902 22:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.902 22:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:08.160 [2024-10-11 22:32:11.287116] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:08.160 22:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:08.160 22:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:08.160 22:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.160 22:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:08.160 ************************************ 00:09:08.160 START TEST lvs_grow_clean 00:09:08.160 ************************************ 00:09:08.160 22:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:09:08.160 22:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:09:08.160 22:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:08.160 22:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:08.160 22:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:08.160 22:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:08.160 22:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:08.160 22:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:08.160 22:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:08.160 22:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:08.419 22:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:08.419 22:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:08.677 22:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=84c065f7-0c89-490f-ba31-ae2c381f56f3 00:09:08.677 22:32:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84c065f7-0c89-490f-ba31-ae2c381f56f3 00:09:08.677 22:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:08.935 22:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:08.935 22:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:08.935 22:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 84c065f7-0c89-490f-ba31-ae2c381f56f3 lvol 150 00:09:09.194 22:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=aff01097-c1df-4fd8-a2b8-a1792ca93342 00:09:09.194 22:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:09.194 22:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:09.452 [2024-10-11 22:32:12.700881] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:09.453 [2024-10-11 22:32:12.700965] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:09.453 true 00:09:09.453 22:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:09.453 22:32:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84c065f7-0c89-490f-ba31-ae2c381f56f3 00:09:10.020 22:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:10.020 22:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:10.020 22:32:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 aff01097-c1df-4fd8-a2b8-a1792ca93342 00:09:10.278 22:32:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:10.537 [2024-10-11 22:32:13.784180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.537 22:32:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:11.105 22:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=132793 00:09:11.105 22:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:11.105 22:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # 
trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:11.105 22:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 132793 /var/tmp/bdevperf.sock 00:09:11.105 22:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 132793 ']' 00:09:11.105 22:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:11.105 22:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:11.105 22:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:11.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:11.105 22:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:11.105 22:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:11.105 [2024-10-11 22:32:14.112385] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:09:11.105 [2024-10-11 22:32:14.112453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132793 ] 00:09:11.105 [2024-10-11 22:32:14.171985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.105 [2024-10-11 22:32:14.216631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.105 22:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:11.105 22:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:09:11.105 22:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:11.672 Nvme0n1 00:09:11.672 22:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:11.931 [ 00:09:11.931 { 00:09:11.931 "name": "Nvme0n1", 00:09:11.931 "aliases": [ 00:09:11.931 "aff01097-c1df-4fd8-a2b8-a1792ca93342" 00:09:11.931 ], 00:09:11.931 "product_name": "NVMe disk", 00:09:11.931 "block_size": 4096, 00:09:11.931 "num_blocks": 38912, 00:09:11.931 "uuid": "aff01097-c1df-4fd8-a2b8-a1792ca93342", 00:09:11.931 "numa_id": 0, 00:09:11.931 "assigned_rate_limits": { 00:09:11.931 "rw_ios_per_sec": 0, 00:09:11.931 "rw_mbytes_per_sec": 0, 00:09:11.931 "r_mbytes_per_sec": 0, 00:09:11.931 "w_mbytes_per_sec": 0 00:09:11.931 }, 00:09:11.931 "claimed": false, 00:09:11.931 "zoned": false, 00:09:11.931 "supported_io_types": { 00:09:11.931 "read": true, 
00:09:11.931 "write": true, 00:09:11.931 "unmap": true, 00:09:11.931 "flush": true, 00:09:11.931 "reset": true, 00:09:11.931 "nvme_admin": true, 00:09:11.931 "nvme_io": true, 00:09:11.931 "nvme_io_md": false, 00:09:11.931 "write_zeroes": true, 00:09:11.931 "zcopy": false, 00:09:11.931 "get_zone_info": false, 00:09:11.931 "zone_management": false, 00:09:11.931 "zone_append": false, 00:09:11.931 "compare": true, 00:09:11.931 "compare_and_write": true, 00:09:11.931 "abort": true, 00:09:11.931 "seek_hole": false, 00:09:11.931 "seek_data": false, 00:09:11.931 "copy": true, 00:09:11.931 "nvme_iov_md": false 00:09:11.931 }, 00:09:11.931 "memory_domains": [ 00:09:11.931 { 00:09:11.931 "dma_device_id": "system", 00:09:11.931 "dma_device_type": 1 00:09:11.931 } 00:09:11.931 ], 00:09:11.931 "driver_specific": { 00:09:11.931 "nvme": [ 00:09:11.931 { 00:09:11.931 "trid": { 00:09:11.931 "trtype": "TCP", 00:09:11.931 "adrfam": "IPv4", 00:09:11.931 "traddr": "10.0.0.2", 00:09:11.931 "trsvcid": "4420", 00:09:11.931 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:11.931 }, 00:09:11.931 "ctrlr_data": { 00:09:11.931 "cntlid": 1, 00:09:11.931 "vendor_id": "0x8086", 00:09:11.931 "model_number": "SPDK bdev Controller", 00:09:11.931 "serial_number": "SPDK0", 00:09:11.931 "firmware_revision": "25.01", 00:09:11.931 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:11.932 "oacs": { 00:09:11.932 "security": 0, 00:09:11.932 "format": 0, 00:09:11.932 "firmware": 0, 00:09:11.932 "ns_manage": 0 00:09:11.932 }, 00:09:11.932 "multi_ctrlr": true, 00:09:11.932 "ana_reporting": false 00:09:11.932 }, 00:09:11.932 "vs": { 00:09:11.932 "nvme_version": "1.3" 00:09:11.932 }, 00:09:11.932 "ns_data": { 00:09:11.932 "id": 1, 00:09:11.932 "can_share": true 00:09:11.932 } 00:09:11.932 } 00:09:11.932 ], 00:09:11.932 "mp_policy": "active_passive" 00:09:11.932 } 00:09:11.932 } 00:09:11.932 ] 00:09:11.932 22:32:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=132887 
00:09:11.932 22:32:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:11.932 22:32:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:12.191 Running I/O for 10 seconds... 00:09:13.131 Latency(us) 00:09:13.131 [2024-10-11T20:32:16.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.131 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.131 Nvme0n1 : 1.00 15498.00 60.54 0.00 0.00 0.00 0.00 0.00 00:09:13.131 [2024-10-11T20:32:16.399Z] =================================================================================================================== 00:09:13.131 [2024-10-11T20:32:16.399Z] Total : 15498.00 60.54 0.00 0.00 0.00 0.00 0.00 00:09:13.131 00:09:14.068 22:32:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 84c065f7-0c89-490f-ba31-ae2c381f56f3 00:09:14.068 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.068 Nvme0n1 : 2.00 15646.00 61.12 0.00 0.00 0.00 0.00 0.00 00:09:14.068 [2024-10-11T20:32:17.336Z] =================================================================================================================== 00:09:14.068 [2024-10-11T20:32:17.336Z] Total : 15646.00 61.12 0.00 0.00 0.00 0.00 0.00 00:09:14.068 00:09:14.327 true 00:09:14.327 22:32:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84c065f7-0c89-490f-ba31-ae2c381f56f3 00:09:14.327 22:32:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:09:14.586 22:32:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:14.586 22:32:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:14.586 22:32:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 132887 00:09:15.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.153 Nvme0n1 : 3.00 15706.00 61.35 0.00 0.00 0.00 0.00 0.00 00:09:15.153 [2024-10-11T20:32:18.421Z] =================================================================================================================== 00:09:15.153 [2024-10-11T20:32:18.421Z] Total : 15706.00 61.35 0.00 0.00 0.00 0.00 0.00 00:09:15.153 00:09:16.090 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.090 Nvme0n1 : 4.00 15814.50 61.78 0.00 0.00 0.00 0.00 0.00 00:09:16.090 [2024-10-11T20:32:19.358Z] =================================================================================================================== 00:09:16.090 [2024-10-11T20:32:19.358Z] Total : 15814.50 61.78 0.00 0.00 0.00 0.00 0.00 00:09:16.090 00:09:17.027 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.027 Nvme0n1 : 5.00 15855.60 61.94 0.00 0.00 0.00 0.00 0.00 00:09:17.027 [2024-10-11T20:32:20.295Z] =================================================================================================================== 00:09:17.027 [2024-10-11T20:32:20.295Z] Total : 15855.60 61.94 0.00 0.00 0.00 0.00 0.00 00:09:17.027 00:09:18.404 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.404 Nvme0n1 : 6.00 15893.33 62.08 0.00 0.00 0.00 0.00 0.00 00:09:18.404 [2024-10-11T20:32:21.672Z] =================================================================================================================== 00:09:18.404 
[2024-10-11T20:32:21.672Z] Total : 15893.33 62.08 0.00 0.00 0.00 0.00 0.00 00:09:18.404 00:09:19.341 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.341 Nvme0n1 : 7.00 15928.86 62.22 0.00 0.00 0.00 0.00 0.00 00:09:19.341 [2024-10-11T20:32:22.609Z] =================================================================================================================== 00:09:19.341 [2024-10-11T20:32:22.609Z] Total : 15928.86 62.22 0.00 0.00 0.00 0.00 0.00 00:09:19.341 00:09:20.276 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.276 Nvme0n1 : 8.00 15980.25 62.42 0.00 0.00 0.00 0.00 0.00 00:09:20.276 [2024-10-11T20:32:23.544Z] =================================================================================================================== 00:09:20.276 [2024-10-11T20:32:23.544Z] Total : 15980.25 62.42 0.00 0.00 0.00 0.00 0.00 00:09:20.276 00:09:21.212 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.212 Nvme0n1 : 9.00 16012.44 62.55 0.00 0.00 0.00 0.00 0.00 00:09:21.212 [2024-10-11T20:32:24.480Z] =================================================================================================================== 00:09:21.212 [2024-10-11T20:32:24.480Z] Total : 16012.44 62.55 0.00 0.00 0.00 0.00 0.00 00:09:21.212 00:09:22.149 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.149 Nvme0n1 : 10.00 16038.70 62.65 0.00 0.00 0.00 0.00 0.00 00:09:22.149 [2024-10-11T20:32:25.417Z] =================================================================================================================== 00:09:22.149 [2024-10-11T20:32:25.417Z] Total : 16038.70 62.65 0.00 0.00 0.00 0.00 0.00 00:09:22.149 00:09:22.149 00:09:22.149 Latency(us) 00:09:22.149 [2024-10-11T20:32:25.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.149 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:22.149 Nvme0n1 : 10.01 16038.19 62.65 0.00 0.00 7976.24 2293.76 14951.92 00:09:22.149 [2024-10-11T20:32:25.417Z] =================================================================================================================== 00:09:22.149 [2024-10-11T20:32:25.417Z] Total : 16038.19 62.65 0.00 0.00 7976.24 2293.76 14951.92 00:09:22.149 { 00:09:22.149 "results": [ 00:09:22.149 { 00:09:22.149 "job": "Nvme0n1", 00:09:22.149 "core_mask": "0x2", 00:09:22.149 "workload": "randwrite", 00:09:22.149 "status": "finished", 00:09:22.149 "queue_depth": 128, 00:09:22.149 "io_size": 4096, 00:09:22.149 "runtime": 10.008301, 00:09:22.149 "iops": 16038.186701219318, 00:09:22.149 "mibps": 62.64916680163796, 00:09:22.149 "io_failed": 0, 00:09:22.149 "io_timeout": 0, 00:09:22.149 "avg_latency_us": 7976.243810420395, 00:09:22.149 "min_latency_us": 2293.76, 00:09:22.149 "max_latency_us": 14951.917037037038 00:09:22.149 } 00:09:22.149 ], 00:09:22.149 "core_count": 1 00:09:22.149 } 00:09:22.149 22:32:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 132793 00:09:22.149 22:32:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 132793 ']' 00:09:22.149 22:32:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 132793 00:09:22.149 22:32:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:22.149 22:32:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:22.149 22:32:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 132793 00:09:22.149 22:32:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:22.149 22:32:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:22.149 22:32:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 132793' 00:09:22.149 killing process with pid 132793 00:09:22.149 22:32:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 132793 00:09:22.149 Received shutdown signal, test time was about 10.000000 seconds 00:09:22.149 00:09:22.149 Latency(us) 00:09:22.149 [2024-10-11T20:32:25.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.149 [2024-10-11T20:32:25.417Z] =================================================================================================================== 00:09:22.149 [2024-10-11T20:32:25.417Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:22.149 22:32:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 132793 00:09:22.408 22:32:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:22.666 22:32:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:22.924 22:32:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84c065f7-0c89-490f-ba31-ae2c381f56f3 00:09:22.924 22:32:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:23.182 22:32:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:23.182 22:32:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:23.182 22:32:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:23.441 [2024-10-11 22:32:26.598394] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:23.441 22:32:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84c065f7-0c89-490f-ba31-ae2c381f56f3 00:09:23.441 22:32:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:23.442 22:32:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84c065f7-0c89-490f-ba31-ae2c381f56f3 00:09:23.442 22:32:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.442 22:32:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.442 22:32:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.442 22:32:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.442 22:32:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.442 22:32:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.442 22:32:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.442 22:32:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:23.442 22:32:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84c065f7-0c89-490f-ba31-ae2c381f56f3 00:09:23.700 request: 00:09:23.700 { 00:09:23.700 "uuid": "84c065f7-0c89-490f-ba31-ae2c381f56f3", 00:09:23.700 "method": "bdev_lvol_get_lvstores", 00:09:23.700 "req_id": 1 00:09:23.700 } 00:09:23.700 Got JSON-RPC error response 00:09:23.700 response: 00:09:23.700 { 00:09:23.700 "code": -19, 00:09:23.700 "message": "No such device" 00:09:23.700 } 00:09:23.700 22:32:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:23.700 22:32:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:23.700 22:32:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:23.700 22:32:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:23.700 22:32:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:23.959 aio_bdev 00:09:23.959 22:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev aff01097-c1df-4fd8-a2b8-a1792ca93342 00:09:23.959 22:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=aff01097-c1df-4fd8-a2b8-a1792ca93342 00:09:23.959 22:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:23.959 22:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:23.959 22:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:23.959 22:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:23.959 22:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:24.217 22:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b aff01097-c1df-4fd8-a2b8-a1792ca93342 -t 2000 00:09:24.475 [ 00:09:24.475 { 00:09:24.475 "name": "aff01097-c1df-4fd8-a2b8-a1792ca93342", 00:09:24.475 "aliases": [ 00:09:24.475 "lvs/lvol" 00:09:24.475 ], 00:09:24.475 "product_name": "Logical Volume", 00:09:24.475 "block_size": 4096, 00:09:24.475 "num_blocks": 38912, 00:09:24.475 "uuid": "aff01097-c1df-4fd8-a2b8-a1792ca93342", 00:09:24.475 "assigned_rate_limits": { 00:09:24.475 "rw_ios_per_sec": 0, 00:09:24.475 "rw_mbytes_per_sec": 0, 00:09:24.475 "r_mbytes_per_sec": 0, 00:09:24.475 "w_mbytes_per_sec": 0 00:09:24.475 }, 00:09:24.475 "claimed": false, 00:09:24.475 "zoned": false, 00:09:24.475 "supported_io_types": { 00:09:24.475 "read": true, 00:09:24.475 "write": true, 00:09:24.475 "unmap": true, 00:09:24.475 "flush": false, 00:09:24.475 "reset": true, 00:09:24.475 
"nvme_admin": false, 00:09:24.475 "nvme_io": false, 00:09:24.475 "nvme_io_md": false, 00:09:24.475 "write_zeroes": true, 00:09:24.475 "zcopy": false, 00:09:24.475 "get_zone_info": false, 00:09:24.475 "zone_management": false, 00:09:24.475 "zone_append": false, 00:09:24.475 "compare": false, 00:09:24.475 "compare_and_write": false, 00:09:24.475 "abort": false, 00:09:24.475 "seek_hole": true, 00:09:24.475 "seek_data": true, 00:09:24.475 "copy": false, 00:09:24.475 "nvme_iov_md": false 00:09:24.475 }, 00:09:24.475 "driver_specific": { 00:09:24.475 "lvol": { 00:09:24.475 "lvol_store_uuid": "84c065f7-0c89-490f-ba31-ae2c381f56f3", 00:09:24.475 "base_bdev": "aio_bdev", 00:09:24.475 "thin_provision": false, 00:09:24.475 "num_allocated_clusters": 38, 00:09:24.475 "snapshot": false, 00:09:24.475 "clone": false, 00:09:24.475 "esnap_clone": false 00:09:24.475 } 00:09:24.475 } 00:09:24.475 } 00:09:24.475 ] 00:09:24.475 22:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:24.475 22:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84c065f7-0c89-490f-ba31-ae2c381f56f3 00:09:24.475 22:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:24.734 22:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:24.734 22:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84c065f7-0c89-490f-ba31-ae2c381f56f3 00:09:24.734 22:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:24.992 22:32:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:24.992 22:32:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete aff01097-c1df-4fd8-a2b8-a1792ca93342 00:09:25.250 22:32:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 84c065f7-0c89-490f-ba31-ae2c381f56f3 00:09:25.820 22:32:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:25.820 22:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:26.079 00:09:26.079 real 0m17.759s 00:09:26.079 user 0m16.541s 00:09:26.079 sys 0m2.198s 00:09:26.079 22:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:26.079 22:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:26.079 ************************************ 00:09:26.079 END TEST lvs_grow_clean 00:09:26.079 ************************************ 00:09:26.079 22:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:26.079 22:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:26.079 22:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.079 22:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:26.079 ************************************ 
00:09:26.079 START TEST lvs_grow_dirty 00:09:26.079 ************************************ 00:09:26.079 22:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:26.079 22:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:26.079 22:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:26.079 22:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:26.079 22:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:26.079 22:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:26.079 22:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:26.079 22:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:26.079 22:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:26.079 22:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:26.338 22:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:26.338 22:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:26.596 22:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2ec0a17b-f831-40f7-af43-35415d5b3b30 00:09:26.596 22:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ec0a17b-f831-40f7-af43-35415d5b3b30 00:09:26.596 22:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:26.856 22:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:26.856 22:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:26.856 22:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2ec0a17b-f831-40f7-af43-35415d5b3b30 lvol 150 00:09:27.115 22:32:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=08423c67-c05e-4472-b028-dbac2ab54700 00:09:27.115 22:32:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:27.115 22:32:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:27.374 [2024-10-11 22:32:30.532024] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:09:27.374 [2024-10-11 22:32:30.532099] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:27.374 true 00:09:27.374 22:32:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ec0a17b-f831-40f7-af43-35415d5b3b30 00:09:27.374 22:32:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:27.633 22:32:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:27.633 22:32:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:27.891 22:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 08423c67-c05e-4472-b028-dbac2ab54700 00:09:28.150 22:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:28.408 [2024-10-11 22:32:31.635350] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.408 22:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:28.667 22:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=134941 00:09:28.667 22:32:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:28.667 22:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:28.667 22:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 134941 /var/tmp/bdevperf.sock 00:09:28.667 22:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 134941 ']' 00:09:28.667 22:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:28.667 22:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:28.667 22:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:28.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:28.667 22:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:28.667 22:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:28.925 [2024-10-11 22:32:31.966357] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:09:28.925 [2024-10-11 22:32:31.966425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134941 ] 00:09:28.926 [2024-10-11 22:32:32.023939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.926 [2024-10-11 22:32:32.068495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.183 22:32:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:29.183 22:32:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:29.183 22:32:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:29.441 Nvme0n1 00:09:29.441 22:32:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:29.700 [ 00:09:29.700 { 00:09:29.700 "name": "Nvme0n1", 00:09:29.700 "aliases": [ 00:09:29.700 "08423c67-c05e-4472-b028-dbac2ab54700" 00:09:29.700 ], 00:09:29.700 "product_name": "NVMe disk", 00:09:29.700 "block_size": 4096, 00:09:29.700 "num_blocks": 38912, 00:09:29.700 "uuid": "08423c67-c05e-4472-b028-dbac2ab54700", 00:09:29.700 "numa_id": 0, 00:09:29.700 "assigned_rate_limits": { 00:09:29.700 "rw_ios_per_sec": 0, 00:09:29.700 "rw_mbytes_per_sec": 0, 00:09:29.700 "r_mbytes_per_sec": 0, 00:09:29.700 "w_mbytes_per_sec": 0 00:09:29.700 }, 00:09:29.700 "claimed": false, 00:09:29.700 "zoned": false, 00:09:29.700 "supported_io_types": { 00:09:29.700 "read": true, 
00:09:29.700 "write": true, 00:09:29.700 "unmap": true, 00:09:29.700 "flush": true, 00:09:29.700 "reset": true, 00:09:29.700 "nvme_admin": true, 00:09:29.700 "nvme_io": true, 00:09:29.700 "nvme_io_md": false, 00:09:29.700 "write_zeroes": true, 00:09:29.700 "zcopy": false, 00:09:29.700 "get_zone_info": false, 00:09:29.700 "zone_management": false, 00:09:29.700 "zone_append": false, 00:09:29.700 "compare": true, 00:09:29.700 "compare_and_write": true, 00:09:29.700 "abort": true, 00:09:29.700 "seek_hole": false, 00:09:29.700 "seek_data": false, 00:09:29.700 "copy": true, 00:09:29.700 "nvme_iov_md": false 00:09:29.700 }, 00:09:29.700 "memory_domains": [ 00:09:29.700 { 00:09:29.700 "dma_device_id": "system", 00:09:29.700 "dma_device_type": 1 00:09:29.700 } 00:09:29.700 ], 00:09:29.700 "driver_specific": { 00:09:29.700 "nvme": [ 00:09:29.700 { 00:09:29.700 "trid": { 00:09:29.700 "trtype": "TCP", 00:09:29.700 "adrfam": "IPv4", 00:09:29.700 "traddr": "10.0.0.2", 00:09:29.700 "trsvcid": "4420", 00:09:29.700 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:29.700 }, 00:09:29.700 "ctrlr_data": { 00:09:29.700 "cntlid": 1, 00:09:29.700 "vendor_id": "0x8086", 00:09:29.700 "model_number": "SPDK bdev Controller", 00:09:29.700 "serial_number": "SPDK0", 00:09:29.700 "firmware_revision": "25.01", 00:09:29.700 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:29.700 "oacs": { 00:09:29.700 "security": 0, 00:09:29.700 "format": 0, 00:09:29.700 "firmware": 0, 00:09:29.700 "ns_manage": 0 00:09:29.700 }, 00:09:29.700 "multi_ctrlr": true, 00:09:29.700 "ana_reporting": false 00:09:29.700 }, 00:09:29.700 "vs": { 00:09:29.700 "nvme_version": "1.3" 00:09:29.700 }, 00:09:29.700 "ns_data": { 00:09:29.700 "id": 1, 00:09:29.700 "can_share": true 00:09:29.700 } 00:09:29.700 } 00:09:29.700 ], 00:09:29.700 "mp_policy": "active_passive" 00:09:29.700 } 00:09:29.700 } 00:09:29.700 ] 00:09:29.700 22:32:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=135077 
00:09:29.700 22:32:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:29.700 22:32:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:29.959 Running I/O for 10 seconds... 00:09:30.894 Latency(us) 00:09:30.894 [2024-10-11T20:32:34.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.894 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.894 Nvme0n1 : 1.00 15621.00 61.02 0.00 0.00 0.00 0.00 0.00 00:09:30.894 [2024-10-11T20:32:34.162Z] =================================================================================================================== 00:09:30.894 [2024-10-11T20:32:34.162Z] Total : 15621.00 61.02 0.00 0.00 0.00 0.00 0.00 00:09:30.894 00:09:31.830 22:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2ec0a17b-f831-40f7-af43-35415d5b3b30 00:09:31.830 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.830 Nvme0n1 : 2.00 15780.50 61.64 0.00 0.00 0.00 0.00 0.00 00:09:31.830 [2024-10-11T20:32:35.098Z] =================================================================================================================== 00:09:31.830 [2024-10-11T20:32:35.098Z] Total : 15780.50 61.64 0.00 0.00 0.00 0.00 0.00 00:09:31.830 00:09:32.089 true 00:09:32.089 22:32:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ec0a17b-f831-40f7-af43-35415d5b3b30 00:09:32.089 22:32:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:09:32.347 22:32:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:32.347 22:32:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:32.347 22:32:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 135077 00:09:32.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.914 Nvme0n1 : 3.00 15854.33 61.93 0.00 0.00 0.00 0.00 0.00 00:09:32.914 [2024-10-11T20:32:36.182Z] =================================================================================================================== 00:09:32.914 [2024-10-11T20:32:36.182Z] Total : 15854.33 61.93 0.00 0.00 0.00 0.00 0.00 00:09:32.914 00:09:33.849 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.849 Nvme0n1 : 4.00 15954.75 62.32 0.00 0.00 0.00 0.00 0.00 00:09:33.849 [2024-10-11T20:32:37.117Z] =================================================================================================================== 00:09:33.849 [2024-10-11T20:32:37.117Z] Total : 15954.75 62.32 0.00 0.00 0.00 0.00 0.00 00:09:33.849 00:09:35.225 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.225 Nvme0n1 : 5.00 16040.40 62.66 0.00 0.00 0.00 0.00 0.00 00:09:35.225 [2024-10-11T20:32:38.493Z] =================================================================================================================== 00:09:35.225 [2024-10-11T20:32:38.493Z] Total : 16040.40 62.66 0.00 0.00 0.00 0.00 0.00 00:09:35.225 00:09:36.161 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.161 Nvme0n1 : 6.00 16097.50 62.88 0.00 0.00 0.00 0.00 0.00 00:09:36.161 [2024-10-11T20:32:39.429Z] =================================================================================================================== 00:09:36.161 
[2024-10-11T20:32:39.430Z] Total : 16097.50 62.88 0.00 0.00 0.00 0.00 0.00 00:09:36.162 00:09:37.098 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.098 Nvme0n1 : 7.00 16129.29 63.01 0.00 0.00 0.00 0.00 0.00 00:09:37.098 [2024-10-11T20:32:40.366Z] =================================================================================================================== 00:09:37.098 [2024-10-11T20:32:40.366Z] Total : 16129.29 63.01 0.00 0.00 0.00 0.00 0.00 00:09:37.098 00:09:38.034 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.034 Nvme0n1 : 8.00 16168.88 63.16 0.00 0.00 0.00 0.00 0.00 00:09:38.034 [2024-10-11T20:32:41.302Z] =================================================================================================================== 00:09:38.034 [2024-10-11T20:32:41.302Z] Total : 16168.88 63.16 0.00 0.00 0.00 0.00 0.00 00:09:38.034 00:09:38.969 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.969 Nvme0n1 : 9.00 16192.67 63.25 0.00 0.00 0.00 0.00 0.00 00:09:38.969 [2024-10-11T20:32:42.237Z] =================================================================================================================== 00:09:38.969 [2024-10-11T20:32:42.237Z] Total : 16192.67 63.25 0.00 0.00 0.00 0.00 0.00 00:09:38.969 00:09:39.905 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.905 Nvme0n1 : 10.00 16211.70 63.33 0.00 0.00 0.00 0.00 0.00 00:09:39.905 [2024-10-11T20:32:43.173Z] =================================================================================================================== 00:09:39.905 [2024-10-11T20:32:43.173Z] Total : 16211.70 63.33 0.00 0.00 0.00 0.00 0.00 00:09:39.905 00:09:39.905 00:09:39.905 Latency(us) 00:09:39.905 [2024-10-11T20:32:43.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.905 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:39.905 Nvme0n1 : 10.01 16213.48 63.33 0.00 0.00 7890.15 5218.61 23301.69 00:09:39.905 [2024-10-11T20:32:43.173Z] =================================================================================================================== 00:09:39.905 [2024-10-11T20:32:43.173Z] Total : 16213.48 63.33 0.00 0.00 7890.15 5218.61 23301.69 00:09:39.905 { 00:09:39.905 "results": [ 00:09:39.905 { 00:09:39.905 "job": "Nvme0n1", 00:09:39.905 "core_mask": "0x2", 00:09:39.905 "workload": "randwrite", 00:09:39.905 "status": "finished", 00:09:39.905 "queue_depth": 128, 00:09:39.905 "io_size": 4096, 00:09:39.905 "runtime": 10.006795, 00:09:39.905 "iops": 16213.482938343395, 00:09:39.905 "mibps": 63.33391772790389, 00:09:39.905 "io_failed": 0, 00:09:39.905 "io_timeout": 0, 00:09:39.905 "avg_latency_us": 7890.149228087837, 00:09:39.905 "min_latency_us": 5218.607407407408, 00:09:39.905 "max_latency_us": 23301.68888888889 00:09:39.905 } 00:09:39.905 ], 00:09:39.905 "core_count": 1 00:09:39.905 } 00:09:39.905 22:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 134941 00:09:39.905 22:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 134941 ']' 00:09:39.905 22:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 134941 00:09:39.905 22:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:39.905 22:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:39.905 22:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 134941 00:09:39.905 22:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:39.905 22:32:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:39.905 22:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 134941' 00:09:39.905 killing process with pid 134941 00:09:39.905 22:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 134941 00:09:39.905 Received shutdown signal, test time was about 10.000000 seconds 00:09:39.905 00:09:39.905 Latency(us) 00:09:39.905 [2024-10-11T20:32:43.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.905 [2024-10-11T20:32:43.173Z] =================================================================================================================== 00:09:39.905 [2024-10-11T20:32:43.173Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:39.905 22:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 134941 00:09:40.163 22:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:40.421 22:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:40.680 22:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ec0a17b-f831-40f7-af43-35415d5b3b30 00:09:40.680 22:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:40.939 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:09:40.939 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:40.939 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 132418 00:09:40.939 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 132418 00:09:40.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 132418 Killed "${NVMF_APP[@]}" "$@" 00:09:40.939 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:40.939 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:40.939 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:40.939 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:40.939 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:40.939 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=136413 00:09:40.939 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:40.939 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 136413 00:09:40.939 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 136413 ']' 00:09:40.939 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.939 22:32:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:40.939 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.939 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:40.939 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:41.198 [2024-10-11 22:32:44.255422] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:09:41.198 [2024-10-11 22:32:44.255488] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.198 [2024-10-11 22:32:44.316365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.198 [2024-10-11 22:32:44.357451] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.198 [2024-10-11 22:32:44.357507] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:41.198 [2024-10-11 22:32:44.357529] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.198 [2024-10-11 22:32:44.357541] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.198 [2024-10-11 22:32:44.357557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:41.198 [2024-10-11 22:32:44.358135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.198 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:41.198 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:41.198 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:41.198 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:41.198 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:41.456 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.456 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:41.715 [2024-10-11 22:32:44.746089] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:41.715 [2024-10-11 22:32:44.746213] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:41.715 [2024-10-11 22:32:44.746259] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:41.715 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:41.715 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 08423c67-c05e-4472-b028-dbac2ab54700 00:09:41.715 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=08423c67-c05e-4472-b028-dbac2ab54700 
00:09:41.715 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:41.715 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:41.715 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:41.715 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:41.715 22:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:41.973 22:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 08423c67-c05e-4472-b028-dbac2ab54700 -t 2000 00:09:42.231 [ 00:09:42.231 { 00:09:42.231 "name": "08423c67-c05e-4472-b028-dbac2ab54700", 00:09:42.231 "aliases": [ 00:09:42.231 "lvs/lvol" 00:09:42.231 ], 00:09:42.231 "product_name": "Logical Volume", 00:09:42.231 "block_size": 4096, 00:09:42.231 "num_blocks": 38912, 00:09:42.231 "uuid": "08423c67-c05e-4472-b028-dbac2ab54700", 00:09:42.231 "assigned_rate_limits": { 00:09:42.231 "rw_ios_per_sec": 0, 00:09:42.231 "rw_mbytes_per_sec": 0, 00:09:42.231 "r_mbytes_per_sec": 0, 00:09:42.231 "w_mbytes_per_sec": 0 00:09:42.231 }, 00:09:42.231 "claimed": false, 00:09:42.231 "zoned": false, 00:09:42.231 "supported_io_types": { 00:09:42.231 "read": true, 00:09:42.231 "write": true, 00:09:42.231 "unmap": true, 00:09:42.231 "flush": false, 00:09:42.231 "reset": true, 00:09:42.231 "nvme_admin": false, 00:09:42.231 "nvme_io": false, 00:09:42.231 "nvme_io_md": false, 00:09:42.231 "write_zeroes": true, 00:09:42.231 "zcopy": false, 00:09:42.231 "get_zone_info": false, 00:09:42.231 "zone_management": false, 00:09:42.231 "zone_append": 
false, 00:09:42.231 "compare": false, 00:09:42.231 "compare_and_write": false, 00:09:42.231 "abort": false, 00:09:42.231 "seek_hole": true, 00:09:42.231 "seek_data": true, 00:09:42.231 "copy": false, 00:09:42.231 "nvme_iov_md": false 00:09:42.231 }, 00:09:42.231 "driver_specific": { 00:09:42.231 "lvol": { 00:09:42.231 "lvol_store_uuid": "2ec0a17b-f831-40f7-af43-35415d5b3b30", 00:09:42.231 "base_bdev": "aio_bdev", 00:09:42.231 "thin_provision": false, 00:09:42.231 "num_allocated_clusters": 38, 00:09:42.231 "snapshot": false, 00:09:42.231 "clone": false, 00:09:42.231 "esnap_clone": false 00:09:42.231 } 00:09:42.231 } 00:09:42.231 } 00:09:42.231 ] 00:09:42.232 22:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:42.232 22:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ec0a17b-f831-40f7-af43-35415d5b3b30 00:09:42.232 22:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:42.489 22:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:42.489 22:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ec0a17b-f831-40f7-af43-35415d5b3b30 00:09:42.489 22:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:42.748 22:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:42.748 22:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:43.006 [2024-10-11 22:32:46.087865] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:43.006 22:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ec0a17b-f831-40f7-af43-35415d5b3b30 00:09:43.006 22:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:43.006 22:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ec0a17b-f831-40f7-af43-35415d5b3b30 00:09:43.006 22:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.006 22:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.006 22:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.006 22:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.006 22:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.006 22:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.006 22:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.006 22:32:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:43.006 22:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ec0a17b-f831-40f7-af43-35415d5b3b30 00:09:43.266 request: 00:09:43.266 { 00:09:43.266 "uuid": "2ec0a17b-f831-40f7-af43-35415d5b3b30", 00:09:43.266 "method": "bdev_lvol_get_lvstores", 00:09:43.266 "req_id": 1 00:09:43.266 } 00:09:43.266 Got JSON-RPC error response 00:09:43.266 response: 00:09:43.266 { 00:09:43.266 "code": -19, 00:09:43.266 "message": "No such device" 00:09:43.266 } 00:09:43.266 22:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:43.266 22:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:43.266 22:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:43.266 22:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:43.266 22:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:43.525 aio_bdev 00:09:43.525 22:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 08423c67-c05e-4472-b028-dbac2ab54700 00:09:43.525 22:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=08423c67-c05e-4472-b028-dbac2ab54700 00:09:43.525 22:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:43.525 22:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:43.525 22:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:43.525 22:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:43.525 22:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:43.783 22:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 08423c67-c05e-4472-b028-dbac2ab54700 -t 2000 00:09:44.041 [ 00:09:44.041 { 00:09:44.041 "name": "08423c67-c05e-4472-b028-dbac2ab54700", 00:09:44.041 "aliases": [ 00:09:44.041 "lvs/lvol" 00:09:44.041 ], 00:09:44.041 "product_name": "Logical Volume", 00:09:44.041 "block_size": 4096, 00:09:44.041 "num_blocks": 38912, 00:09:44.042 "uuid": "08423c67-c05e-4472-b028-dbac2ab54700", 00:09:44.042 "assigned_rate_limits": { 00:09:44.042 "rw_ios_per_sec": 0, 00:09:44.042 "rw_mbytes_per_sec": 0, 00:09:44.042 "r_mbytes_per_sec": 0, 00:09:44.042 "w_mbytes_per_sec": 0 00:09:44.042 }, 00:09:44.042 "claimed": false, 00:09:44.042 "zoned": false, 00:09:44.042 "supported_io_types": { 00:09:44.042 "read": true, 00:09:44.042 "write": true, 00:09:44.042 "unmap": true, 00:09:44.042 "flush": false, 00:09:44.042 "reset": true, 00:09:44.042 "nvme_admin": false, 00:09:44.042 "nvme_io": false, 00:09:44.042 "nvme_io_md": false, 00:09:44.042 "write_zeroes": true, 00:09:44.042 "zcopy": false, 00:09:44.042 "get_zone_info": false, 00:09:44.042 "zone_management": false, 00:09:44.042 "zone_append": false, 00:09:44.042 "compare": false, 00:09:44.042 "compare_and_write": false, 
00:09:44.042 "abort": false, 00:09:44.042 "seek_hole": true, 00:09:44.042 "seek_data": true, 00:09:44.042 "copy": false, 00:09:44.042 "nvme_iov_md": false 00:09:44.042 }, 00:09:44.042 "driver_specific": { 00:09:44.042 "lvol": { 00:09:44.042 "lvol_store_uuid": "2ec0a17b-f831-40f7-af43-35415d5b3b30", 00:09:44.042 "base_bdev": "aio_bdev", 00:09:44.042 "thin_provision": false, 00:09:44.042 "num_allocated_clusters": 38, 00:09:44.042 "snapshot": false, 00:09:44.042 "clone": false, 00:09:44.042 "esnap_clone": false 00:09:44.042 } 00:09:44.042 } 00:09:44.042 } 00:09:44.042 ] 00:09:44.042 22:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:44.042 22:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ec0a17b-f831-40f7-af43-35415d5b3b30 00:09:44.042 22:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:44.300 22:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:44.300 22:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ec0a17b-f831-40f7-af43-35415d5b3b30 00:09:44.300 22:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:44.559 22:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:44.559 22:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 08423c67-c05e-4472-b028-dbac2ab54700 00:09:44.817 22:32:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2ec0a17b-f831-40f7-af43-35415d5b3b30 00:09:45.076 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:45.334 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:45.334 00:09:45.334 real 0m19.422s 00:09:45.334 user 0m49.463s 00:09:45.334 sys 0m4.465s 00:09:45.334 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:45.334 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:45.334 ************************************ 00:09:45.334 END TEST lvs_grow_dirty 00:09:45.334 ************************************ 00:09:45.334 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:45.334 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:45.334 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:45.334 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:45.334 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:45.334 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:45.335 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:45.335 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:45.335 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:45.593 nvmf_trace.0 00:09:45.593 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:45.593 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:45.593 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:45.593 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:45.593 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:45.593 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:45.593 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:45.593 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:45.593 rmmod nvme_tcp 00:09:45.593 rmmod nvme_fabrics 00:09:45.593 rmmod nvme_keyring 00:09:45.593 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:45.593 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:45.593 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:45.593 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 136413 ']' 00:09:45.593 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 136413 00:09:45.593 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 136413 ']' 00:09:45.593 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 136413 
00:09:45.593 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:45.593 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:45.593 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 136413 00:09:45.593 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:45.593 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:45.593 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 136413' 00:09:45.593 killing process with pid 136413 00:09:45.593 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 136413 00:09:45.593 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 136413 00:09:45.854 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:45.854 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:45.854 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:45.854 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:45.854 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:09:45.854 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:45.854 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:09:45.854 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:45.854 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:09:45.854 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.854 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.854 22:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.767 22:32:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:47.767 00:09:47.767 real 0m42.748s 00:09:47.767 user 1m11.960s 00:09:47.767 sys 0m8.744s 00:09:47.767 22:32:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:47.767 22:32:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:47.767 ************************************ 00:09:47.767 END TEST nvmf_lvs_grow 00:09:47.767 ************************************ 00:09:47.767 22:32:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:47.767 22:32:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:47.767 22:32:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:47.767 22:32:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:47.767 ************************************ 00:09:47.767 START TEST nvmf_bdev_io_wait 00:09:47.767 ************************************ 00:09:47.767 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:48.026 * Looking for test storage... 
00:09:48.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:48.026 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:48.026 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:09:48.026 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:48.026 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:48.026 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:48.026 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:48.026 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:48.026 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:48.026 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:48.026 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:48.026 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:48.026 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:48.026 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:48.026 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:48.026 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:48.026 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:48.026 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:09:48.026 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:48.026 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:48.026 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:48.026 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:48.026 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:48.026 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:48.026 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:48.026 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:48.026 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:48.026 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:48.027 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.027 --rc genhtml_branch_coverage=1 00:09:48.027 --rc genhtml_function_coverage=1 00:09:48.027 --rc genhtml_legend=1 00:09:48.027 --rc geninfo_all_blocks=1 00:09:48.027 --rc geninfo_unexecuted_blocks=1 00:09:48.027 00:09:48.027 ' 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:48.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.027 --rc genhtml_branch_coverage=1 00:09:48.027 --rc genhtml_function_coverage=1 00:09:48.027 --rc genhtml_legend=1 00:09:48.027 --rc geninfo_all_blocks=1 00:09:48.027 --rc geninfo_unexecuted_blocks=1 00:09:48.027 00:09:48.027 ' 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:48.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.027 --rc genhtml_branch_coverage=1 00:09:48.027 --rc genhtml_function_coverage=1 00:09:48.027 --rc genhtml_legend=1 00:09:48.027 --rc geninfo_all_blocks=1 00:09:48.027 --rc geninfo_unexecuted_blocks=1 00:09:48.027 00:09:48.027 ' 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:48.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.027 --rc genhtml_branch_coverage=1 00:09:48.027 --rc genhtml_function_coverage=1 00:09:48.027 --rc genhtml_legend=1 00:09:48.027 --rc geninfo_all_blocks=1 00:09:48.027 --rc geninfo_unexecuted_blocks=1 00:09:48.027 00:09:48.027 ' 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:48.027 22:32:51 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:48.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:48.027 22:32:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.564 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:50.564 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:50.564 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:50.564 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:50.564 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:50.564 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:50.564 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:50.564 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:50.564 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:50.564 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:50.564 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:50.564 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:50.564 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:50.564 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:09:50.564 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:50.564 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:50.564 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:50.564 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:50.564 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:50.564 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:50.564 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:50.564 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:50.564 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:50.564 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:50.564 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:50.564 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:50.565 22:32:53 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:50.565 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:50.565 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.565 22:32:53 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:50.565 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.565 
22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:50.565 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:50.565 22:32:53 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:50.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:50.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:09:50.565 00:09:50.565 --- 10.0.0.2 ping statistics --- 00:09:50.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.565 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:50.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:50.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:09:50.565 00:09:50.565 --- 10.0.0.1 ping statistics --- 00:09:50.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.565 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=138949 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 138949 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 138949 ']' 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:50.565 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.565 [2024-10-11 22:32:53.731407] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:09:50.565 [2024-10-11 22:32:53.731499] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.565 [2024-10-11 22:32:53.798309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:50.824 [2024-10-11 22:32:53.846448] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:50.824 [2024-10-11 22:32:53.846496] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:50.824 [2024-10-11 22:32:53.846517] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.824 [2024-10-11 22:32:53.846527] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:50.824 [2024-10-11 22:32:53.846535] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:50.824 [2024-10-11 22:32:53.848095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.824 [2024-10-11 22:32:53.848205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:50.824 [2024-10-11 22:32:53.848305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:50.824 [2024-10-11 22:32:53.848312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.824 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:50.824 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:50.824 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:50.824 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:50.824 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.824 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.824 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:50.824 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.824 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.824 22:32:53 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.824 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:50.824 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.824 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.824 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.824 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:50.824 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.824 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.824 [2024-10-11 22:32:54.072117] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:50.824 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.824 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:50.824 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.824 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:51.084 Malloc0 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.084 
22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:51.084 [2024-10-11 22:32:54.125173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=139098 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=139100 
00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:51.084 { 00:09:51.084 "params": { 00:09:51.084 "name": "Nvme$subsystem", 00:09:51.084 "trtype": "$TEST_TRANSPORT", 00:09:51.084 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:51.084 "adrfam": "ipv4", 00:09:51.084 "trsvcid": "$NVMF_PORT", 00:09:51.084 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:51.084 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:51.084 "hdgst": ${hdgst:-false}, 00:09:51.084 "ddgst": ${ddgst:-false} 00:09:51.084 }, 00:09:51.084 "method": "bdev_nvme_attach_controller" 00:09:51.084 } 00:09:51.084 EOF 00:09:51.084 )") 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=139102 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:51.084 { 00:09:51.084 "params": { 00:09:51.084 
"name": "Nvme$subsystem", 00:09:51.084 "trtype": "$TEST_TRANSPORT", 00:09:51.084 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:51.084 "adrfam": "ipv4", 00:09:51.084 "trsvcid": "$NVMF_PORT", 00:09:51.084 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:51.084 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:51.084 "hdgst": ${hdgst:-false}, 00:09:51.084 "ddgst": ${ddgst:-false} 00:09:51.084 }, 00:09:51.084 "method": "bdev_nvme_attach_controller" 00:09:51.084 } 00:09:51.084 EOF 00:09:51.084 )") 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=139105 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:51.084 { 00:09:51.084 "params": { 00:09:51.084 "name": "Nvme$subsystem", 00:09:51.084 "trtype": "$TEST_TRANSPORT", 00:09:51.084 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:51.084 "adrfam": "ipv4", 00:09:51.084 "trsvcid": "$NVMF_PORT", 00:09:51.084 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:51.084 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:09:51.084 "hdgst": ${hdgst:-false}, 00:09:51.084 "ddgst": ${ddgst:-false} 00:09:51.084 }, 00:09:51.084 "method": "bdev_nvme_attach_controller" 00:09:51.084 } 00:09:51.084 EOF 00:09:51.084 )") 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:51.084 { 00:09:51.084 "params": { 00:09:51.084 "name": "Nvme$subsystem", 00:09:51.084 "trtype": "$TEST_TRANSPORT", 00:09:51.084 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:51.084 "adrfam": "ipv4", 00:09:51.084 "trsvcid": "$NVMF_PORT", 00:09:51.084 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:51.084 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:51.084 "hdgst": ${hdgst:-false}, 00:09:51.084 "ddgst": ${ddgst:-false} 00:09:51.084 }, 00:09:51.084 "method": "bdev_nvme_attach_controller" 00:09:51.084 } 00:09:51.084 EOF 00:09:51.084 )") 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 139098 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@580 -- # cat 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:51.084 "params": { 00:09:51.084 "name": "Nvme1", 00:09:51.084 "trtype": "tcp", 00:09:51.084 "traddr": "10.0.0.2", 00:09:51.084 "adrfam": "ipv4", 00:09:51.084 "trsvcid": "4420", 00:09:51.084 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:51.084 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:51.084 "hdgst": false, 00:09:51.084 "ddgst": false 00:09:51.084 }, 00:09:51.084 "method": "bdev_nvme_attach_controller" 00:09:51.084 }' 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:51.084 "params": { 00:09:51.084 "name": "Nvme1", 00:09:51.084 "trtype": "tcp", 00:09:51.084 "traddr": "10.0.0.2", 00:09:51.084 "adrfam": "ipv4", 00:09:51.084 "trsvcid": "4420", 00:09:51.084 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:51.084 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:51.084 "hdgst": false, 00:09:51.084 "ddgst": false 00:09:51.084 }, 00:09:51.084 "method": "bdev_nvme_attach_controller" 00:09:51.084 }' 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:51.084 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:51.084 "params": { 00:09:51.084 "name": "Nvme1", 00:09:51.084 "trtype": "tcp", 00:09:51.084 "traddr": "10.0.0.2", 00:09:51.084 "adrfam": "ipv4", 00:09:51.084 "trsvcid": "4420", 00:09:51.084 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:51.084 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:51.084 "hdgst": false, 00:09:51.084 "ddgst": false 00:09:51.084 }, 00:09:51.084 "method": "bdev_nvme_attach_controller" 00:09:51.084 }' 00:09:51.085 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:51.085 22:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:51.085 "params": { 00:09:51.085 "name": "Nvme1", 00:09:51.085 "trtype": "tcp", 00:09:51.085 "traddr": "10.0.0.2", 00:09:51.085 "adrfam": "ipv4", 00:09:51.085 "trsvcid": "4420", 00:09:51.085 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:51.085 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:51.085 "hdgst": false, 00:09:51.085 "ddgst": false 00:09:51.085 }, 00:09:51.085 "method": "bdev_nvme_attach_controller" 00:09:51.085 }' 00:09:51.085 [2024-10-11 22:32:54.175268] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
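The four identical JSON blobs printed above come from `gen_nvmf_target_json` in nvmf/common.sh, which expands a heredoc per subsystem and feeds the result to each bdevperf via `/dev/fd/63`. A minimal sketch of that heredoc pattern (the function name `gen_target_json` and the hardcoded address/port are illustrative stand-ins, not the real helper):

```shell
# Simplified sketch of the heredoc-based config generation seen in the log.
# The real gen_nvmf_target_json parameterizes transport/address via env vars;
# here they are fixed to the values this run used (10.0.0.2:4420, tcp).
gen_target_json() {
    subsystem=${1:-1}
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
gen_target_json 1
```

Each of the four bdevperf instances receives the same subsystem-1 config, which is why the four printed blobs are identical.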
00:09:51.085 [2024-10-11 22:32:54.175268] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:09:51.085 [2024-10-11 22:32:54.175374] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:51.085 [2024-10-11 22:32:54.175373] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:51.085 [2024-10-11 22:32:54.175764] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:09:51.085 [2024-10-11 22:32:54.175763] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:09:51.085 [2024-10-11 22:32:54.175841] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:51.085 [2024-10-11 22:32:54.175840] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:51.344 [2024-10-11 22:32:54.352294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.344 [2024-10-11 22:32:54.395059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:51.344 [2024-10-11 22:32:54.454837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.344 [2024-10-11 22:32:54.498091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:51.344 [2024-10-11 22:32:54.526139] app.c: 919:spdk_app_start:
*NOTICE*: Total cores available: 1 00:09:51.344 [2024-10-11 22:32:54.563037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:51.344 [2024-10-11 22:32:54.600164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.602 [2024-10-11 22:32:54.639760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:51.602 Running I/O for 1 seconds... 00:09:51.602 Running I/O for 1 seconds... 00:09:51.602 Running I/O for 1 seconds... 00:09:51.602 Running I/O for 1 seconds... 00:09:52.537 5933.00 IOPS, 23.18 MiB/s 00:09:52.537 Latency(us) 00:09:52.537 [2024-10-11T20:32:55.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.537 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:52.537 Nvme1n1 : 1.02 5953.47 23.26 0.00 0.00 21264.92 8932.31 32428.18 00:09:52.537 [2024-10-11T20:32:55.805Z] =================================================================================================================== 00:09:52.537 [2024-10-11T20:32:55.805Z] Total : 5953.47 23.26 0.00 0.00 21264.92 8932.31 32428.18 00:09:52.537 181160.00 IOPS, 707.66 MiB/s 00:09:52.537 Latency(us) 00:09:52.537 [2024-10-11T20:32:55.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.537 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:52.537 Nvme1n1 : 1.00 180782.57 706.18 0.00 0.00 704.14 338.30 2063.17 00:09:52.537 [2024-10-11T20:32:55.805Z] =================================================================================================================== 00:09:52.537 [2024-10-11T20:32:55.805Z] Total : 180782.57 706.18 0.00 0.00 704.14 338.30 2063.17 00:09:52.537 5815.00 IOPS, 22.71 MiB/s 00:09:52.537 Latency(us) 00:09:52.537 [2024-10-11T20:32:55.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.537 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:52.537 Nvme1n1 : 1.01 
5915.90 23.11 0.00 0.00 21559.10 5461.33 39224.51 00:09:52.537 [2024-10-11T20:32:55.805Z] =================================================================================================================== 00:09:52.537 [2024-10-11T20:32:55.805Z] Total : 5915.90 23.11 0.00 0.00 21559.10 5461.33 39224.51 00:09:52.807 9630.00 IOPS, 37.62 MiB/s 00:09:52.807 Latency(us) 00:09:52.807 [2024-10-11T20:32:56.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.808 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:52.808 Nvme1n1 : 1.01 9700.07 37.89 0.00 0.00 13147.61 4927.34 24855.13 00:09:52.808 [2024-10-11T20:32:56.076Z] =================================================================================================================== 00:09:52.808 [2024-10-11T20:32:56.076Z] Total : 9700.07 37.89 0.00 0.00 13147.61 4927.34 24855.13 00:09:52.808 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 139100 00:09:52.808 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 139102 00:09:52.808 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 139105 00:09:52.808 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:52.808 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.808 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.808 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.808 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:52.808 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:52.808 22:32:55 
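As a sanity check on the bdevperf tables above, the MiB/s column is just IOPS times the 4096-byte IO size divided by 2^20. The helper name below is hypothetical, but the arithmetic reproduces the reported figures:

```shell
# Convert an IOPS figure at 4 KiB IO size into MiB/s, as bdevperf reports it.
iops_to_mibs() {
    awk -v iops="$1" -v iosz=4096 'BEGIN { printf "%.2f\n", iops * iosz / (1024 * 1024) }'
}
iops_to_mibs 5933      # read job's first pass above: prints 23.18
iops_to_mibs 181160    # flush job's first pass above: prints 707.66
```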
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:52.808 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:52.808 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:52.808 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:52.808 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:52.808 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:52.808 rmmod nvme_tcp 00:09:52.808 rmmod nvme_fabrics 00:09:52.808 rmmod nvme_keyring 00:09:52.808 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:52.808 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:52.808 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:52.808 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 138949 ']' 00:09:52.808 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 138949 00:09:52.808 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 138949 ']' 00:09:52.808 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 138949 00:09:52.808 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:52.808 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:52.808 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 138949 00:09:52.808 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:09:52.808 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:52.808 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 138949' 00:09:52.808 killing process with pid 138949 00:09:52.808 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 138949 00:09:52.808 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 138949 00:09:53.071 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:53.071 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:53.071 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:53.071 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:53.071 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:09:53.071 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:53.071 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:09:53.071 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:53.071 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:53.071 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.071 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.071 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.684 22:32:58 
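The `iptr` cleanup above pipes `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`, dropping only the rules that `ipts` tagged with an `SPDK_NVMF:` comment earlier in the run. The filtering step can be sketched on canned data (the sample ruleset below is illustrative, not a live iptables dump):

```shell
# Mimic `iptables-save | grep -v SPDK_NVMF | iptables-restore` on sample rules:
# only the SPDK-tagged ACCEPT rule is removed, everything else survives.
saved_rules='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF:rule
-A INPUT -i lo -j ACCEPT'
kept_rules=$(printf '%s\n' "$saved_rules" | grep -v SPDK_NVMF)
printf '%s\n' "$kept_rules"
```

Tagging rules at insert time and filtering by tag at teardown means the test never has to remember which rule numbers it added.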
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:55.684 00:09:55.684 real 0m7.288s 00:09:55.684 user 0m15.234s 00:09:55.684 sys 0m3.529s 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:55.684 ************************************ 00:09:55.684 END TEST nvmf_bdev_io_wait 00:09:55.684 ************************************ 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:55.684 ************************************ 00:09:55.684 START TEST nvmf_queue_depth 00:09:55.684 ************************************ 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:55.684 * Looking for test storage... 
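The `iptr` cleanup in the trace above rewrites the firewall by piping `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`, so only the rules the harness tagged with an `SPDK_NVMF` comment are dropped. A sketch of just the filtering step on a canned ruleset (the sample rules are illustrative, and the restore itself needs root, so it is left commented out):

```shell
# Two sample rules; only the SPDK-tagged one should be dropped.
saved='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF:rule
-A INPUT -i lo -j ACCEPT'

# Keep every line that is not tagged, mirroring iptables-save | grep -v SPDK_NVMF.
filtered=$(printf '%s\n' "$saved" | grep -v SPDK_NVMF)
# iptables-restore <<EOF ... EOF                  # root-only; shown for shape
```

Tagging rules at insert time is what makes this teardown safe: untagged rules installed by the host survive untouched.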
00:09:55.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:55.684 
22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:55.684 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:55.684 --rc genhtml_branch_coverage=1 00:09:55.684 --rc genhtml_function_coverage=1 00:09:55.684 --rc genhtml_legend=1 00:09:55.684 --rc geninfo_all_blocks=1 00:09:55.684 --rc geninfo_unexecuted_blocks=1 00:09:55.684 00:09:55.684 ' 00:09:55.684 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:55.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.684 --rc genhtml_branch_coverage=1 00:09:55.684 --rc genhtml_function_coverage=1 00:09:55.684 --rc genhtml_legend=1 00:09:55.684 --rc geninfo_all_blocks=1 00:09:55.684 --rc geninfo_unexecuted_blocks=1 00:09:55.684 00:09:55.685 ' 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:55.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.685 --rc genhtml_branch_coverage=1 00:09:55.685 --rc genhtml_function_coverage=1 00:09:55.685 --rc genhtml_legend=1 00:09:55.685 --rc geninfo_all_blocks=1 00:09:55.685 --rc geninfo_unexecuted_blocks=1 00:09:55.685 00:09:55.685 ' 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:55.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.685 --rc genhtml_branch_coverage=1 00:09:55.685 --rc genhtml_function_coverage=1 00:09:55.685 --rc genhtml_legend=1 00:09:55.685 --rc geninfo_all_blocks=1 00:09:55.685 --rc geninfo_unexecuted_blocks=1 00:09:55.685 00:09:55.685 ' 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:55.685 22:32:58 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.685 22:32:58 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:55.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.685 22:32:58 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:55.685 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.703 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:57.703 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:57.703 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:57.703 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:57.703 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:57.703 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:57.703 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:57.703 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:57.703 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:57.704 22:33:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:57.704 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:57.704 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:57.704 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:57.704 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:57.704 
22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:57.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:57.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:09:57.704 00:09:57.704 --- 10.0.0.2 ping statistics --- 00:09:57.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.704 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:57.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:57.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:09:57.704 00:09:57.704 --- 10.0.0.1 ping statistics --- 00:09:57.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.704 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=141392 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:57.704 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 141392 00:09:57.705 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 141392 ']' 00:09:57.705 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.705 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:57.705 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.705 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:57.705 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.705 [2024-10-11 22:33:00.836052] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:09:57.705 [2024-10-11 22:33:00.836128] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.705 [2024-10-11 22:33:00.903165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.705 [2024-10-11 22:33:00.948127] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:57.705 [2024-10-11 22:33:00.948184] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
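Taken together, the `nvmf_tcp_init` steps traced above build a two-interface loopback topology: the NIC's first port (`cvl_0_0`) is moved into a network namespace as the target side at 10.0.0.2, the second port (`cvl_0_1`) stays in the root namespace as the initiator at 10.0.0.1, a tagged firewall rule admits port 4420, and a ping in each direction verifies the path. A condensed, root-only sketch of that sequence, with interface and namespace names copied from the trace:

```shell
ip netns add cvl_0_0_ns_spdk                       # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
```

Because both ports are on the same physical NIC, this gives real-NIC TCP traffic without needing a second machine.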
00:09:57.705 [2024-10-11 22:33:00.948208] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:57.705 [2024-10-11 22:33:00.948219] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:57.705 [2024-10-11 22:33:00.948228] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:57.705 [2024-10-11 22:33:00.948794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.973 [2024-10-11 22:33:01.091653] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.973 Malloc0 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.973 [2024-10-11 22:33:01.137894] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:57.973 22:33:01 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=141440 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 141440 /var/tmp/bdevperf.sock 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 141440 ']' 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:57.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:57.973 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.973 [2024-10-11 22:33:01.185222] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:09:57.974 [2024-10-11 22:33:01.185284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141440 ] 00:09:58.240 [2024-10-11 22:33:01.244687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.240 [2024-10-11 22:33:01.293507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.240 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:58.240 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:58.240 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:58.240 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.240 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:58.507 NVMe0n1 00:09:58.507 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.507 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:58.507 Running I/O for 10 seconds... 
00:10:00.473 8192.00 IOPS, 32.00 MiB/s [2024-10-11T20:33:04.712Z] 8194.00 IOPS, 32.01 MiB/s [2024-10-11T20:33:05.691Z] 8293.33 IOPS, 32.40 MiB/s [2024-10-11T20:33:07.119Z] 8441.50 IOPS, 32.97 MiB/s [2024-10-11T20:33:07.733Z] 8450.00 IOPS, 33.01 MiB/s [2024-10-11T20:33:08.718Z] 8520.67 IOPS, 33.28 MiB/s [2024-10-11T20:33:09.702Z] 8509.14 IOPS, 33.24 MiB/s [2024-10-11T20:33:11.141Z] 8564.88 IOPS, 33.46 MiB/s [2024-10-11T20:33:11.732Z] 8552.33 IOPS, 33.41 MiB/s [2024-10-11T20:33:11.991Z] 8589.20 IOPS, 33.55 MiB/s 00:10:08.723 Latency(us) 00:10:08.723 [2024-10-11T20:33:11.991Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:08.723 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:08.723 Verification LBA range: start 0x0 length 0x4000 00:10:08.723 NVMe0n1 : 10.09 8612.53 33.64 0.00 0.00 118430.11 22233.69 73011.96 00:10:08.723 [2024-10-11T20:33:11.991Z] =================================================================================================================== 00:10:08.723 [2024-10-11T20:33:11.991Z] Total : 8612.53 33.64 0.00 0.00 118430.11 22233.69 73011.96 00:10:08.723 { 00:10:08.723 "results": [ 00:10:08.723 { 00:10:08.723 "job": "NVMe0n1", 00:10:08.723 "core_mask": "0x1", 00:10:08.723 "workload": "verify", 00:10:08.723 "status": "finished", 00:10:08.723 "verify_range": { 00:10:08.723 "start": 0, 00:10:08.723 "length": 16384 00:10:08.723 }, 00:10:08.723 "queue_depth": 1024, 00:10:08.723 "io_size": 4096, 00:10:08.723 "runtime": 10.091805, 00:10:08.723 "iops": 8612.532644061197, 00:10:08.723 "mibps": 33.64270564086405, 00:10:08.723 "io_failed": 0, 00:10:08.723 "io_timeout": 0, 00:10:08.723 "avg_latency_us": 118430.10567819419, 00:10:08.723 "min_latency_us": 22233.694814814815, 00:10:08.723 "max_latency_us": 73011.95851851851 00:10:08.723 } 00:10:08.723 ], 00:10:08.723 "core_count": 1 00:10:08.723 } 00:10:08.723 22:33:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 141440 00:10:08.723 22:33:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 141440 ']' 00:10:08.723 22:33:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 141440 00:10:08.723 22:33:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:08.723 22:33:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:08.723 22:33:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 141440 00:10:08.723 22:33:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:08.723 22:33:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:08.723 22:33:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 141440' 00:10:08.723 killing process with pid 141440 00:10:08.723 22:33:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 141440 00:10:08.723 Received shutdown signal, test time was about 10.000000 seconds 00:10:08.723 00:10:08.723 Latency(us) 00:10:08.723 [2024-10-11T20:33:11.991Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:08.723 [2024-10-11T20:33:11.991Z] =================================================================================================================== 00:10:08.723 [2024-10-11T20:33:11.991Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:08.723 22:33:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 141440 00:10:08.981 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:08.981 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
00:10:08.981 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:08.981 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:08.981 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:08.981 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:08.981 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:08.981 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:08.981 rmmod nvme_tcp 00:10:08.981 rmmod nvme_fabrics 00:10:08.981 rmmod nvme_keyring 00:10:08.981 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:08.981 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:08.981 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:08.981 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 141392 ']' 00:10:08.981 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 141392 00:10:08.981 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 141392 ']' 00:10:08.981 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 141392 00:10:08.981 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:08.981 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:08.981 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 141392 00:10:08.981 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:10:08.981 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:08.981 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 141392' 00:10:08.981 killing process with pid 141392 00:10:08.981 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 141392 00:10:08.981 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 141392 00:10:09.241 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:09.241 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:09.241 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:09.241 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:09.241 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:10:09.241 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:09.241 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:10:09.241 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:09.241 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:09.241 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.241 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.241 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.152 22:33:14 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:11.152 00:10:11.152 real 0m16.020s 00:10:11.152 user 0m22.458s 00:10:11.152 sys 0m3.116s 00:10:11.152 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:11.152 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.152 ************************************ 00:10:11.152 END TEST nvmf_queue_depth 00:10:11.152 ************************************ 00:10:11.152 22:33:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:11.152 22:33:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:11.152 22:33:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.152 22:33:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:11.411 ************************************ 00:10:11.411 START TEST nvmf_target_multipath 00:10:11.411 ************************************ 00:10:11.411 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:11.411 * Looking for test storage... 
00:10:11.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.411 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:11.411 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:10:11.411 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:11.411 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:11.411 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.411 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.411 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.411 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.411 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.411 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.411 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.411 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.411 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.411 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.411 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.411 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:11.411 22:33:14 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:11.411 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.411 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:11.411 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:11.411 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:11.411 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.411 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:11.411 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.411 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:11.411 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:11.411 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.411 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:11.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.412 --rc genhtml_branch_coverage=1 00:10:11.412 --rc genhtml_function_coverage=1 00:10:11.412 --rc genhtml_legend=1 00:10:11.412 --rc geninfo_all_blocks=1 00:10:11.412 --rc geninfo_unexecuted_blocks=1 00:10:11.412 00:10:11.412 ' 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:11.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.412 --rc genhtml_branch_coverage=1 00:10:11.412 --rc genhtml_function_coverage=1 00:10:11.412 --rc genhtml_legend=1 00:10:11.412 --rc geninfo_all_blocks=1 00:10:11.412 --rc geninfo_unexecuted_blocks=1 00:10:11.412 00:10:11.412 ' 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:11.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.412 --rc genhtml_branch_coverage=1 00:10:11.412 --rc genhtml_function_coverage=1 00:10:11.412 --rc genhtml_legend=1 00:10:11.412 --rc geninfo_all_blocks=1 00:10:11.412 --rc geninfo_unexecuted_blocks=1 00:10:11.412 00:10:11.412 ' 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:11.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.412 --rc genhtml_branch_coverage=1 00:10:11.412 --rc genhtml_function_coverage=1 00:10:11.412 --rc genhtml_legend=1 00:10:11.412 --rc geninfo_all_blocks=1 00:10:11.412 --rc geninfo_unexecuted_blocks=1 00:10:11.412 00:10:11.412 ' 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:11.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:11.412 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:13.949 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:13.949 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.949 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:13.950 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:13.950 22:33:16 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:13.950 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:13.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:13.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:10:13.950 00:10:13.950 --- 10.0.0.2 ping statistics --- 00:10:13.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.950 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:13.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
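
The `nvmf_tcp_init` steps traced above (flush both NICs, move the target NIC into a namespace, assign 10.0.0.1/24 host-side and 10.0.0.2/24 inside, open TCP port 4420, then ping both ways) can be condensed into the following sketch. This mirrors the commands in the log, not a general-purpose script: the device names `cvl_0_0`/`cvl_0_1`, the namespace name, and the addresses are taken from this run, and it must be executed as root on a machine that actually has those interfaces.

```shell
#!/usr/bin/env bash
# Sketch of the dual-NIC namespace wiring done by nvmf_tcp_init in this log.
# Assumptions: run as root; net devices cvl_0_0 (target) and cvl_0_1
# (initiator) exist, as on the test machine.
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"            # target NIC lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side stays on the host
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP (port 4420) in; the comment tags the rule so the later
# cleanup can strip it with iptables-save | grep -v SPDK_NVMF.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF: test rule'

ping -c 1 10.0.0.2                         # host -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1     # namespace -> host
```

The teardown visible further down in the log is the inverse: restore iptables from a save filtered through `grep -v SPDK_NVMF`, delete the namespace (which returns the NIC to the host), and flush the addresses again.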
00:10:13.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:10:13.950 00:10:13.950 --- 10.0.0.1 ping statistics --- 00:10:13.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.950 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:13.950 only one NIC for nvmf test 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:13.950 22:33:16 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:13.950 rmmod nvme_tcp 00:10:13.950 rmmod nvme_fabrics 00:10:13.950 rmmod nvme_keyring 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:13.950 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:10:13.950 22:33:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:13.950 22:33:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:10:13.950 22:33:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:13.950 22:33:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:13.950 22:33:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.950 22:33:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.950 22:33:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' 
'' == iso ']' 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:15.857 00:10:15.857 real 0m4.649s 00:10:15.857 user 0m0.950s 00:10:15.857 sys 0m1.721s 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:15.857 ************************************ 00:10:15.857 END TEST nvmf_target_multipath 00:10:15.857 ************************************ 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:15.857 22:33:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:16.117 ************************************ 00:10:16.117 START TEST nvmf_zcopy 00:10:16.117 ************************************ 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:16.117 * Looking for test storage... 00:10:16.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:16.117 22:33:19 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:16.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.117 --rc genhtml_branch_coverage=1 00:10:16.117 --rc genhtml_function_coverage=1 00:10:16.117 --rc genhtml_legend=1 00:10:16.117 --rc geninfo_all_blocks=1 00:10:16.117 --rc geninfo_unexecuted_blocks=1 00:10:16.117 00:10:16.117 ' 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:16.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.117 --rc genhtml_branch_coverage=1 00:10:16.117 --rc genhtml_function_coverage=1 00:10:16.117 --rc genhtml_legend=1 00:10:16.117 --rc geninfo_all_blocks=1 00:10:16.117 --rc geninfo_unexecuted_blocks=1 00:10:16.117 00:10:16.117 ' 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:16.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.117 --rc genhtml_branch_coverage=1 00:10:16.117 --rc genhtml_function_coverage=1 00:10:16.117 --rc genhtml_legend=1 00:10:16.117 --rc geninfo_all_blocks=1 00:10:16.117 --rc geninfo_unexecuted_blocks=1 00:10:16.117 00:10:16.117 ' 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:16.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.117 --rc genhtml_branch_coverage=1 00:10:16.117 --rc 
genhtml_function_coverage=1 00:10:16.117 --rc genhtml_legend=1 00:10:16.117 --rc geninfo_all_blocks=1 00:10:16.117 --rc geninfo_unexecuted_blocks=1 00:10:16.117 00:10:16.117 ' 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:16.117 22:33:19 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:16.117 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:16.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:16.118 22:33:19 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:16.118 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.651 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:18.651 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:18.651 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:18.651 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:18.651 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:18.651 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:18.651 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:18.651 22:33:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:18.651 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:18.651 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:18.651 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:18.651 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:18.651 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:18.652 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:18.652 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:18.652 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:18.652 22:33:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:18.652 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:18.652 22:33:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:18.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:18.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:10:18.652 00:10:18.652 --- 10.0.0.2 ping statistics --- 00:10:18.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.652 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:18.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:18.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:10:18.652 00:10:18.652 --- 10.0.0.1 ping statistics --- 00:10:18.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.652 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=147232 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 147232 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 147232 ']' 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.652 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:18.653 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.653 [2024-10-11 22:33:21.723406] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:10:18.653 [2024-10-11 22:33:21.723496] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.653 [2024-10-11 22:33:21.787318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.653 [2024-10-11 22:33:21.828512] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.653 [2024-10-11 22:33:21.828580] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:18.653 [2024-10-11 22:33:21.828609] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.653 [2024-10-11 22:33:21.828620] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.653 [2024-10-11 22:33:21.828630] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:18.653 [2024-10-11 22:33:21.829171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.911 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:18.911 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:18.911 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:18.911 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:18.911 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.911 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:18.912 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:18.912 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:18.912 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.912 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.912 [2024-10-11 22:33:21.971245] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:18.912 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.912 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:18.912 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.912 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.912 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.912 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:18.912 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.912 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.912 [2024-10-11 22:33:21.987416] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:18.912 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.912 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:18.912 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.912 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.912 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.912 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:18.912 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.912 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.912 malloc0 00:10:18.912 22:33:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:10:18.912 22:33:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:18.912 22:33:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.912 22:33:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.912 22:33:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.912 22:33:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:18.912 22:33:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:18.912 22:33:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:10:18.912 22:33:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:10:18.912 22:33:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:18.912 22:33:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:18.912 { 00:10:18.912 "params": { 00:10:18.912 "name": "Nvme$subsystem", 00:10:18.912 "trtype": "$TEST_TRANSPORT", 00:10:18.912 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:18.912 "adrfam": "ipv4", 00:10:18.912 "trsvcid": "$NVMF_PORT", 00:10:18.912 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:18.912 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:18.912 "hdgst": ${hdgst:-false}, 00:10:18.912 "ddgst": ${ddgst:-false} 00:10:18.912 }, 00:10:18.912 "method": "bdev_nvme_attach_controller" 00:10:18.912 } 00:10:18.912 EOF 00:10:18.912 )") 00:10:18.912 22:33:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:10:18.912 22:33:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
00:10:18.912 22:33:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:10:18.912 22:33:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:18.912 "params": { 00:10:18.912 "name": "Nvme1", 00:10:18.912 "trtype": "tcp", 00:10:18.912 "traddr": "10.0.0.2", 00:10:18.912 "adrfam": "ipv4", 00:10:18.912 "trsvcid": "4420", 00:10:18.912 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:18.912 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:18.912 "hdgst": false, 00:10:18.912 "ddgst": false 00:10:18.912 }, 00:10:18.912 "method": "bdev_nvme_attach_controller" 00:10:18.912 }' 00:10:18.912 [2024-10-11 22:33:22.072421] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:10:18.912 [2024-10-11 22:33:22.072499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147289 ] 00:10:18.912 [2024-10-11 22:33:22.135919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.171 [2024-10-11 22:33:22.185983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.429 Running I/O for 10 seconds... 
00:10:21.295 5775.00 IOPS, 45.12 MiB/s [2024-10-11T20:33:25.939Z] 5836.50 IOPS, 45.60 MiB/s [2024-10-11T20:33:26.873Z] 5825.67 IOPS, 45.51 MiB/s [2024-10-11T20:33:27.807Z] 5818.50 IOPS, 45.46 MiB/s [2024-10-11T20:33:28.741Z] 5815.20 IOPS, 45.43 MiB/s [2024-10-11T20:33:29.675Z] 5823.67 IOPS, 45.50 MiB/s [2024-10-11T20:33:30.610Z] 5823.43 IOPS, 45.50 MiB/s [2024-10-11T20:33:31.985Z] 5817.75 IOPS, 45.45 MiB/s [2024-10-11T20:33:32.921Z] 5823.22 IOPS, 45.49 MiB/s [2024-10-11T20:33:32.921Z] 5826.60 IOPS, 45.52 MiB/s 00:10:29.653 Latency(us) 00:10:29.653 [2024-10-11T20:33:32.921Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:29.653 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:29.653 Verification LBA range: start 0x0 length 0x1000 00:10:29.653 Nvme1n1 : 10.02 5830.46 45.55 0.00 0.00 21895.13 2585.03 30486.38 00:10:29.653 [2024-10-11T20:33:32.921Z] =================================================================================================================== 00:10:29.653 [2024-10-11T20:33:32.921Z] Total : 5830.46 45.55 0.00 0.00 21895.13 2585.03 30486.38 00:10:29.653 22:33:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=148575 00:10:29.653 22:33:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:29.653 22:33:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.653 22:33:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:29.653 22:33:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:29.653 22:33:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:10:29.653 22:33:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:10:29.653 22:33:32 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:29.653 22:33:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:29.653 { 00:10:29.653 "params": { 00:10:29.653 "name": "Nvme$subsystem", 00:10:29.653 "trtype": "$TEST_TRANSPORT", 00:10:29.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:29.653 "adrfam": "ipv4", 00:10:29.653 "trsvcid": "$NVMF_PORT", 00:10:29.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:29.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:29.653 "hdgst": ${hdgst:-false}, 00:10:29.653 "ddgst": ${ddgst:-false} 00:10:29.653 }, 00:10:29.653 "method": "bdev_nvme_attach_controller" 00:10:29.653 } 00:10:29.653 EOF 00:10:29.653 )") 00:10:29.653 [2024-10-11 22:33:32.768763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.653 [2024-10-11 22:33:32.768806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.653 22:33:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:10:29.653 22:33:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
00:10:29.653 22:33:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:10:29.653 22:33:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:29.653 "params": { 00:10:29.653 "name": "Nvme1", 00:10:29.653 "trtype": "tcp", 00:10:29.653 "traddr": "10.0.0.2", 00:10:29.653 "adrfam": "ipv4", 00:10:29.653 "trsvcid": "4420", 00:10:29.653 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:29.653 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:29.653 "hdgst": false, 00:10:29.653 "ddgst": false 00:10:29.653 }, 00:10:29.653 "method": "bdev_nvme_attach_controller" 00:10:29.653 }' 00:10:29.653 [2024-10-11 22:33:32.776712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.653 [2024-10-11 22:33:32.776735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.653 [2024-10-11 22:33:32.784731] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.653 [2024-10-11 22:33:32.784752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.653 [2024-10-11 22:33:32.792754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.653 [2024-10-11 22:33:32.792776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.653 [2024-10-11 22:33:32.800778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.653 [2024-10-11 22:33:32.800800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.653 [2024-10-11 22:33:32.807104] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:10:29.653 [2024-10-11 22:33:32.807175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148575 ] 00:10:29.653 [2024-10-11 22:33:32.808802] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.653 [2024-10-11 22:33:32.808838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.653 [2024-10-11 22:33:32.816821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.653 [2024-10-11 22:33:32.816857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.653 [2024-10-11 22:33:32.824840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.653 [2024-10-11 22:33:32.824875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.653 [2024-10-11 22:33:32.832863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.653 [2024-10-11 22:33:32.832898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.653 [2024-10-11 22:33:32.840895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.653 [2024-10-11 22:33:32.840915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.653 [2024-10-11 22:33:32.848915] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.653 [2024-10-11 22:33:32.848935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.653 [2024-10-11 22:33:32.856951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.653 [2024-10-11 22:33:32.856972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:10:29.653 [2024-10-11 22:33:32.864970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.653 [2024-10-11 22:33:32.864990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.653 [2024-10-11 22:33:32.867071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:29.653 [2024-10-11 22:33:32.873013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.653 [2024-10-11 22:33:32.873038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.653 [2024-10-11 22:33:32.881058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.654 [2024-10-11 22:33:32.881097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.654 [2024-10-11 22:33:32.889046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.654 [2024-10-11 22:33:32.889066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.654 [2024-10-11 22:33:32.897056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.654 [2024-10-11 22:33:32.897077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.654 [2024-10-11 22:33:32.905079] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.654 [2024-10-11 22:33:32.905098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.654 [2024-10-11 22:33:32.913098] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.654 [2024-10-11 22:33:32.913118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.654 [2024-10-11 22:33:32.915957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:29.654 [2024-10-11 22:33:32.921128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.654 [2024-10-11 22:33:32.921150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.912 [2024-10-11 22:33:32.929148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.912 [2024-10-11 22:33:32.929171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.912 [2024-10-11 22:33:32.937206] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.912 [2024-10-11 22:33:32.937243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.912 [2024-10-11 22:33:32.945230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.912 [2024-10-11 22:33:32.945268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.912 [2024-10-11 22:33:32.953248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.912 [2024-10-11 22:33:32.953286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.912 [2024-10-11 22:33:32.961277] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.912 [2024-10-11 22:33:32.961314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.912 [2024-10-11 22:33:32.969299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.912 [2024-10-11 22:33:32.969337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.912 [2024-10-11 22:33:32.977317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.912 [2024-10-11 22:33:32.977356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.912 [2024-10-11 22:33:32.985299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.912 [2024-10-11 22:33:32.985320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.913 [2024-10-11 22:33:32.993357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.913 [2024-10-11 22:33:32.993393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.913 [2024-10-11 22:33:33.001376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.913 [2024-10-11 22:33:33.001415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.913 [2024-10-11 22:33:33.009396] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.913 [2024-10-11 22:33:33.009432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.913 [2024-10-11 22:33:33.017379] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.913 [2024-10-11 22:33:33.017400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.913 [2024-10-11 22:33:33.025434] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.913 [2024-10-11 22:33:33.025459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.913 [2024-10-11 22:33:33.033442] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.913 [2024-10-11 22:33:33.033466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.913 [2024-10-11 22:33:33.041457] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.913 [2024-10-11 22:33:33.041479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.913 [2024-10-11 22:33:33.049481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.913 [2024-10-11 22:33:33.049503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.913 [2024-10-11 22:33:33.057503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.913 [2024-10-11 22:33:33.057525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.913 [2024-10-11 22:33:33.065521] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.913 [2024-10-11 22:33:33.065564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.913 [2024-10-11 22:33:33.073567] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.913 [2024-10-11 22:33:33.073587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.913 [2024-10-11 22:33:33.081588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.913 [2024-10-11 22:33:33.081609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.913 [2024-10-11 22:33:33.089612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.913 [2024-10-11 22:33:33.089632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.913 [2024-10-11 22:33:33.097675] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.913 [2024-10-11 22:33:33.097700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.913 [2024-10-11 22:33:33.105665] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.913 [2024-10-11 22:33:33.105688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.913 [2024-10-11 22:33:33.113680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.913 [2024-10-11 22:33:33.113701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.913 [2024-10-11 22:33:33.121701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.913 [2024-10-11 22:33:33.121722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.913 [2024-10-11 22:33:33.129723] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.913 [2024-10-11 22:33:33.129743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.913 [2024-10-11 22:33:33.137746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.913 [2024-10-11 22:33:33.137766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.913 [2024-10-11 22:33:33.145789] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.913 [2024-10-11 22:33:33.145812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.913 [2024-10-11 22:33:33.153806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.913 [2024-10-11 22:33:33.153847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.913 [2024-10-11 22:33:33.161844] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.913 [2024-10-11 22:33:33.161864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.913 [2024-10-11 22:33:33.169867] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.913 [2024-10-11 22:33:33.169887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.913 [2024-10-11 22:33:33.177900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.913 [2024-10-11 22:33:33.177919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.171 [2024-10-11 22:33:33.185919] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.171 [2024-10-11 22:33:33.185938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.171 [2024-10-11 22:33:33.193945] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.171 [2024-10-11 22:33:33.193966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.171 [2024-10-11 22:33:33.201962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.171 [2024-10-11 22:33:33.201981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.171 [2024-10-11 22:33:33.209969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.171 [2024-10-11 22:33:33.209989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.171 [2024-10-11 22:33:33.217994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.171 [2024-10-11 22:33:33.218014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.171 [2024-10-11 22:33:33.226018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.171 [2024-10-11 22:33:33.226038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.171 [2024-10-11 22:33:33.234041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.171 [2024-10-11 22:33:33.234061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.171 [2024-10-11 22:33:33.242063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.172 [2024-10-11 22:33:33.242083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.172 [2024-10-11 22:33:33.250139] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.172 [2024-10-11 22:33:33.250165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.172 Running I/O for 5 seconds...
00:10:30.172 [2024-10-11 22:33:33.258166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.172 [2024-10-11 22:33:33.258187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.172 [2024-10-11 22:33:33.271290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.172 [2024-10-11 22:33:33.271319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.172 [2024-10-11 22:33:33.282075] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.172 [2024-10-11 22:33:33.282104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.172 [2024-10-11 22:33:33.292840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.172 [2024-10-11 22:33:33.292868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.172 [2024-10-11 22:33:33.303784] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.172 [2024-10-11 22:33:33.303812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.172 [2024-10-11 22:33:33.314883] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.172 [2024-10-11 22:33:33.314910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.172 [2024-10-11 22:33:33.327950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.172 [2024-10-11 22:33:33.327978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.172 [2024-10-11 22:33:33.338270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.172 [2024-10-11 22:33:33.338298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.172 [2024-10-11 22:33:33.348588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.172 [2024-10-11 22:33:33.348616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.172 [2024-10-11 22:33:33.358980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.172 [2024-10-11 22:33:33.359007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.172 [2024-10-11 22:33:33.369757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.172 [2024-10-11 22:33:33.369785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.172 [2024-10-11 22:33:33.380449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.172 [2024-10-11 22:33:33.380476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.172 [2024-10-11 22:33:33.392781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.172 [2024-10-11 22:33:33.392809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.172 [2024-10-11 22:33:33.402163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.172 [2024-10-11 22:33:33.402190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.172 [2024-10-11 22:33:33.413276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.172 [2024-10-11 22:33:33.413304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.172 [2024-10-11 22:33:33.424730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.172 [2024-10-11 22:33:33.424758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.172 [2024-10-11 22:33:33.437522] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.172 [2024-10-11 22:33:33.437558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.430 [2024-10-11 22:33:33.447465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.430 [2024-10-11 22:33:33.447493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.430 [2024-10-11 22:33:33.458085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.430 [2024-10-11 22:33:33.458119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.430 [2024-10-11 22:33:33.468799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.430 [2024-10-11 22:33:33.468826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.430 [2024-10-11 22:33:33.481333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.430 [2024-10-11 22:33:33.481360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.430 [2024-10-11 22:33:33.491372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.430 [2024-10-11 22:33:33.491399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.430 [2024-10-11 22:33:33.501714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.430 [2024-10-11 22:33:33.501740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.430 [2024-10-11 22:33:33.512172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.430 [2024-10-11 22:33:33.512199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.430 [2024-10-11 22:33:33.522436] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.430 [2024-10-11 22:33:33.522464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.430 [2024-10-11 22:33:33.532923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.430 [2024-10-11 22:33:33.532950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.430 [2024-10-11 22:33:33.543887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.430 [2024-10-11 22:33:33.543914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.430 [2024-10-11 22:33:33.554847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.430 [2024-10-11 22:33:33.554874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.430 [2024-10-11 22:33:33.567459] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.430 [2024-10-11 22:33:33.567485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.430 [2024-10-11 22:33:33.577568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.430 [2024-10-11 22:33:33.577595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.430 [2024-10-11 22:33:33.587514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.430 [2024-10-11 22:33:33.587540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.430 [2024-10-11 22:33:33.598085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.430 [2024-10-11 22:33:33.598111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.430 [2024-10-11 22:33:33.608567] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.430 [2024-10-11 22:33:33.608604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.430 [2024-10-11 22:33:33.619130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.430 [2024-10-11 22:33:33.619157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.430 [2024-10-11 22:33:33.630024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.430 [2024-10-11 22:33:33.630051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.430 [2024-10-11 22:33:33.640774] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.430 [2024-10-11 22:33:33.640800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.430 [2024-10-11 22:33:33.653365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.430 [2024-10-11 22:33:33.653391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.430 [2024-10-11 22:33:33.664920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.430 [2024-10-11 22:33:33.664954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.430 [2024-10-11 22:33:33.674407] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.430 [2024-10-11 22:33:33.674434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.430 [2024-10-11 22:33:33.685051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.431 [2024-10-11 22:33:33.685078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.431 [2024-10-11 22:33:33.698354] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.689 [2024-10-11 22:33:33.698381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.689 [2024-10-11 22:33:33.708588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.689 [2024-10-11 22:33:33.708614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.689 [2024-10-11 22:33:33.719290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.689 [2024-10-11 22:33:33.719316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.689 [2024-10-11 22:33:33.729899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.689 [2024-10-11 22:33:33.729926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.689 [2024-10-11 22:33:33.740786] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.689 [2024-10-11 22:33:33.740828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.689 [2024-10-11 22:33:33.751418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.689 [2024-10-11 22:33:33.751446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.689 [2024-10-11 22:33:33.762217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.689 [2024-10-11 22:33:33.762244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.689 [2024-10-11 22:33:33.774858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.689 [2024-10-11 22:33:33.774884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.689 [2024-10-11 22:33:33.784888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.689 [2024-10-11 22:33:33.784914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.689 [2024-10-11 22:33:33.795462] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.689 [2024-10-11 22:33:33.795490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.689 [2024-10-11 22:33:33.806412] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.689 [2024-10-11 22:33:33.806438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.689 [2024-10-11 22:33:33.819210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.689 [2024-10-11 22:33:33.819236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.689 [2024-10-11 22:33:33.829294] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.689 [2024-10-11 22:33:33.829321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.689 [2024-10-11 22:33:33.839968] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.689 [2024-10-11 22:33:33.839995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.689 [2024-10-11 22:33:33.853405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.689 [2024-10-11 22:33:33.853432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.689 [2024-10-11 22:33:33.863768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.689 [2024-10-11 22:33:33.863795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.689 [2024-10-11 22:33:33.874547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.689 [2024-10-11 22:33:33.874594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.689 [2024-10-11 22:33:33.887151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.689 [2024-10-11 22:33:33.887177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.689 [2024-10-11 22:33:33.899063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.689 [2024-10-11 22:33:33.899089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.689 [2024-10-11 22:33:33.908576] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.689 [2024-10-11 22:33:33.908603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.689 [2024-10-11 22:33:33.918811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.689 [2024-10-11 22:33:33.918837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.689 [2024-10-11 22:33:33.929326] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.689 [2024-10-11 22:33:33.929352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.689 [2024-10-11 22:33:33.939975] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.689 [2024-10-11 22:33:33.940002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.689 [2024-10-11 22:33:33.950630] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.689 [2024-10-11 22:33:33.950657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.948 [2024-10-11 22:33:33.963971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.948 [2024-10-11 22:33:33.963998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.948 [2024-10-11 22:33:33.974234] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.948 [2024-10-11 22:33:33.974274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.948 [2024-10-11 22:33:33.984743] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.948 [2024-10-11 22:33:33.984769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.948 [2024-10-11 22:33:33.995187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.948 [2024-10-11 22:33:33.995214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.948 [2024-10-11 22:33:34.005968] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.948 [2024-10-11 22:33:34.005995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.948 [2024-10-11 22:33:34.016116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.948 [2024-10-11 22:33:34.016142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.948 [2024-10-11 22:33:34.026308] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.948 [2024-10-11 22:33:34.026334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.948 [2024-10-11 22:33:34.036495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.948 [2024-10-11 22:33:34.036521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.948 [2024-10-11 22:33:34.047098] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.948 [2024-10-11 22:33:34.047124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.948 [2024-10-11 22:33:34.057546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.948 [2024-10-11 22:33:34.057581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.948 [2024-10-11 22:33:34.067843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.948 [2024-10-11 22:33:34.067888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.948 [2024-10-11 22:33:34.078331] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.948 [2024-10-11 22:33:34.078365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.948 [2024-10-11 22:33:34.088864] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.948 [2024-10-11 22:33:34.088890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.948 [2024-10-11 22:33:34.099411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.948 [2024-10-11 22:33:34.099437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.948 [2024-10-11 22:33:34.111870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.948 [2024-10-11 22:33:34.111897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.948 [2024-10-11 22:33:34.123415] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.948 [2024-10-11 22:33:34.123442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.948 [2024-10-11 22:33:34.132332] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.948 [2024-10-11 22:33:34.132359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.948 [2024-10-11 22:33:34.143530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.948 [2024-10-11 22:33:34.143583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.948 [2024-10-11 22:33:34.155968] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.948 [2024-10-11 22:33:34.155994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.948 [2024-10-11 22:33:34.165779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.948 [2024-10-11 22:33:34.165807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.948 [2024-10-11 22:33:34.176585] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.948 [2024-10-11 22:33:34.176612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.948 [2024-10-11 22:33:34.187019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.948 [2024-10-11 22:33:34.187045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.948 [2024-10-11 22:33:34.197719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.948 [2024-10-11 22:33:34.197747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.948 [2024-10-11 22:33:34.208040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.948 [2024-10-11 22:33:34.208066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.207 [2024-10-11 22:33:34.219026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.207 [2024-10-11 22:33:34.219053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.207 [2024-10-11 22:33:34.231525] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.207 [2024-10-11 22:33:34.231558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.207 [2024-10-11 22:33:34.241477] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.207 [2024-10-11 22:33:34.241504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.207 [2024-10-11 22:33:34.252012] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.207 [2024-10-11 22:33:34.252038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.207 11913.00 IOPS, 93.07 MiB/s [2024-10-11T20:33:34.475Z] [2024-10-11 22:33:34.262882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.207 [2024-10-11 22:33:34.262909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.207 [2024-10-11 22:33:34.275857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.207 [2024-10-11 22:33:34.275883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.207 [2024-10-11 22:33:34.286267] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.207 [2024-10-11 22:33:34.286293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.207 [2024-10-11 22:33:34.296395] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.207 [2024-10-11 22:33:34.296421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.207 [2024-10-11 22:33:34.306825] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.207 [2024-10-11 22:33:34.306851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.207 [2024-10-11 22:33:34.317673] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.207 [2024-10-11 22:33:34.317700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.207 [2024-10-11 22:33:34.328300] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.207 [2024-10-11 22:33:34.328325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.207 [2024-10-11 22:33:34.341264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.207 [2024-10-11 22:33:34.341291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.207 [2024-10-11 22:33:34.351109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.207 [2024-10-11 22:33:34.351135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.207 [2024-10-11 22:33:34.362024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.207 [2024-10-11 22:33:34.362050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.207 [2024-10-11 22:33:34.374436] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.207 [2024-10-11 22:33:34.374462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.207 [2024-10-11 22:33:34.384454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.207 [2024-10-11 22:33:34.384481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.207 [2024-10-11 22:33:34.394778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.207 [2024-10-11 22:33:34.394804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.207 [2024-10-11 22:33:34.405219] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.207 [2024-10-11 22:33:34.405246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.207 [2024-10-11 22:33:34.417668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.207 [2024-10-11 22:33:34.417711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.207 [2024-10-11 22:33:34.429113] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.207 [2024-10-11 22:33:34.429139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.207 [2024-10-11 22:33:34.437782] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.207 [2024-10-11 22:33:34.437808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.207 [2024-10-11 22:33:34.449400] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.207 [2024-10-11 22:33:34.449439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.207 [2024-10-11 22:33:34.461899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.207 [2024-10-11 22:33:34.461925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.207 [2024-10-11 22:33:34.471846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.207 [2024-10-11 22:33:34.471873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.466 [2024-10-11 22:33:34.482350] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.466 [2024-10-11 22:33:34.482377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.466 [2024-10-11 22:33:34.492840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.466 
[2024-10-11 22:33:34.492867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.466 [2024-10-11 22:33:34.503477] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.466 [2024-10-11 22:33:34.503504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.466 [2024-10-11 22:33:34.516176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.466 [2024-10-11 22:33:34.516203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.466 [2024-10-11 22:33:34.526348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.466 [2024-10-11 22:33:34.526376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.466 [2024-10-11 22:33:34.537322] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.466 [2024-10-11 22:33:34.537350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.466 [2024-10-11 22:33:34.548751] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.466 [2024-10-11 22:33:34.548780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.466 [2024-10-11 22:33:34.559721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.466 [2024-10-11 22:33:34.559749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.466 [2024-10-11 22:33:34.572859] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.466 [2024-10-11 22:33:34.572892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.466 [2024-10-11 22:33:34.583328] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.466 [2024-10-11 22:33:34.583368] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.466 [2024-10-11 22:33:34.594398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.466 [2024-10-11 22:33:34.594426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.466 [2024-10-11 22:33:34.605240] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.466 [2024-10-11 22:33:34.605268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.466 [2024-10-11 22:33:34.616307] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.466 [2024-10-11 22:33:34.616336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.466 [2024-10-11 22:33:34.627439] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.466 [2024-10-11 22:33:34.627467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.466 [2024-10-11 22:33:34.638200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.466 [2024-10-11 22:33:34.638227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.466 [2024-10-11 22:33:34.651362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.466 [2024-10-11 22:33:34.651389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.466 [2024-10-11 22:33:34.663599] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.466 [2024-10-11 22:33:34.663626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.466 [2024-10-11 22:33:34.672765] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.466 [2024-10-11 22:33:34.672793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:31.466 [2024-10-11 22:33:34.684593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.466 [2024-10-11 22:33:34.684620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.466 [2024-10-11 22:33:34.695355] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.466 [2024-10-11 22:33:34.695382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.466 [2024-10-11 22:33:34.705800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.466 [2024-10-11 22:33:34.705827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.466 [2024-10-11 22:33:34.716288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.466 [2024-10-11 22:33:34.716330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.466 [2024-10-11 22:33:34.727166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.466 [2024-10-11 22:33:34.727193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.724 [2024-10-11 22:33:34.740007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.724 [2024-10-11 22:33:34.740034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.724 [2024-10-11 22:33:34.750189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.724 [2024-10-11 22:33:34.750215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.724 [2024-10-11 22:33:34.761133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.724 [2024-10-11 22:33:34.761159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.724 [2024-10-11 22:33:34.773777] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.724 [2024-10-11 22:33:34.773805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.724 [2024-10-11 22:33:34.783676] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.724 [2024-10-11 22:33:34.783705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.724 [2024-10-11 22:33:34.794343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.724 [2024-10-11 22:33:34.794370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.724 [2024-10-11 22:33:34.805267] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.724 [2024-10-11 22:33:34.805308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.724 [2024-10-11 22:33:34.816486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.724 [2024-10-11 22:33:34.816513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.724 [2024-10-11 22:33:34.827387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.724 [2024-10-11 22:33:34.827414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.724 [2024-10-11 22:33:34.841160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.724 [2024-10-11 22:33:34.841187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.724 [2024-10-11 22:33:34.851641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.724 [2024-10-11 22:33:34.851668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.724 [2024-10-11 22:33:34.862499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:31.725 [2024-10-11 22:33:34.862526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.725 [2024-10-11 22:33:34.873334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.725 [2024-10-11 22:33:34.873361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.725 [2024-10-11 22:33:34.884584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.725 [2024-10-11 22:33:34.884613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.725 [2024-10-11 22:33:34.896733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.725 [2024-10-11 22:33:34.896761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.725 [2024-10-11 22:33:34.905882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.725 [2024-10-11 22:33:34.905931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.725 [2024-10-11 22:33:34.917344] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.725 [2024-10-11 22:33:34.917371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.725 [2024-10-11 22:33:34.930947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.725 [2024-10-11 22:33:34.930973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.725 [2024-10-11 22:33:34.941789] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.725 [2024-10-11 22:33:34.941817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.725 [2024-10-11 22:33:34.952683] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.725 
[2024-10-11 22:33:34.952718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.725 [2024-10-11 22:33:34.963395] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.725 [2024-10-11 22:33:34.963422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.725 [2024-10-11 22:33:34.974512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.725 [2024-10-11 22:33:34.974562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.725 [2024-10-11 22:33:34.985617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.725 [2024-10-11 22:33:34.985644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.983 [2024-10-11 22:33:34.996910] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.983 [2024-10-11 22:33:34.996937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.983 [2024-10-11 22:33:35.007351] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.983 [2024-10-11 22:33:35.007377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.983 [2024-10-11 22:33:35.018314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.983 [2024-10-11 22:33:35.018356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.983 [2024-10-11 22:33:35.029184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.983 [2024-10-11 22:33:35.029210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.983 [2024-10-11 22:33:35.040014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.983 [2024-10-11 22:33:35.040041] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.983 [2024-10-11 22:33:35.050627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.983 [2024-10-11 22:33:35.050654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.983 [2024-10-11 22:33:35.063544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.983 [2024-10-11 22:33:35.063592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.983 [2024-10-11 22:33:35.073648] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.983 [2024-10-11 22:33:35.073676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.983 [2024-10-11 22:33:35.084387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.983 [2024-10-11 22:33:35.084414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.983 [2024-10-11 22:33:35.095318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.983 [2024-10-11 22:33:35.095345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.983 [2024-10-11 22:33:35.106344] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.983 [2024-10-11 22:33:35.106370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.983 [2024-10-11 22:33:35.119672] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.983 [2024-10-11 22:33:35.119707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.983 [2024-10-11 22:33:35.130188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.983 [2024-10-11 22:33:35.130214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:31.983 [2024-10-11 22:33:35.141395] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.983 [2024-10-11 22:33:35.141422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.983 [2024-10-11 22:33:35.154223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.983 [2024-10-11 22:33:35.154250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.983 [2024-10-11 22:33:35.164624] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.983 [2024-10-11 22:33:35.164652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.983 [2024-10-11 22:33:35.175387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.983 [2024-10-11 22:33:35.175414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.983 [2024-10-11 22:33:35.187658] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.983 [2024-10-11 22:33:35.187685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.983 [2024-10-11 22:33:35.197228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.984 [2024-10-11 22:33:35.197254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.984 [2024-10-11 22:33:35.208871] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.984 [2024-10-11 22:33:35.208898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.984 [2024-10-11 22:33:35.219313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.984 [2024-10-11 22:33:35.219340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.984 [2024-10-11 22:33:35.230102] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.984 [2024-10-11 22:33:35.230129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.984 [2024-10-11 22:33:35.240701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.984 [2024-10-11 22:33:35.240729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.984 [2024-10-11 22:33:35.251564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.984 [2024-10-11 22:33:35.251591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.242 [2024-10-11 22:33:35.262163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.242 [2024-10-11 22:33:35.262190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.242 11853.50 IOPS, 92.61 MiB/s [2024-10-11T20:33:35.510Z] [2024-10-11 22:33:35.273293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.242 [2024-10-11 22:33:35.273320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.242 [2024-10-11 22:33:35.286138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.242 [2024-10-11 22:33:35.286165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.242 [2024-10-11 22:33:35.296649] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.242 [2024-10-11 22:33:35.296676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.242 [2024-10-11 22:33:35.307695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.242 [2024-10-11 22:33:35.307722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.242 [2024-10-11 22:33:35.320238] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.242 [2024-10-11 22:33:35.320265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.242 [2024-10-11 22:33:35.330405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.242 [2024-10-11 22:33:35.330438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.242 [2024-10-11 22:33:35.341338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.242 [2024-10-11 22:33:35.341364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.242 [2024-10-11 22:33:35.354013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.242 [2024-10-11 22:33:35.354039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.242 [2024-10-11 22:33:35.364473] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.242 [2024-10-11 22:33:35.364500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.242 [2024-10-11 22:33:35.375253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.242 [2024-10-11 22:33:35.375279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.242 [2024-10-11 22:33:35.387826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.242 [2024-10-11 22:33:35.387853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.242 [2024-10-11 22:33:35.399155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.242 [2024-10-11 22:33:35.399183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.242 [2024-10-11 22:33:35.408011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:32.242 [2024-10-11 22:33:35.408038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.242 [2024-10-11 22:33:35.419888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.242 [2024-10-11 22:33:35.419914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.242 [2024-10-11 22:33:35.430514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.242 [2024-10-11 22:33:35.430540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.242 [2024-10-11 22:33:35.440921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.242 [2024-10-11 22:33:35.440947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.242 [2024-10-11 22:33:35.451455] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.242 [2024-10-11 22:33:35.451481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.242 [2024-10-11 22:33:35.462884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.242 [2024-10-11 22:33:35.462910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.242 [2024-10-11 22:33:35.473744] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.242 [2024-10-11 22:33:35.473771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.242 [2024-10-11 22:33:35.485917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.242 [2024-10-11 22:33:35.485943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.242 [2024-10-11 22:33:35.495504] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.242 
[2024-10-11 22:33:35.495545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.242 [2024-10-11 22:33:35.505876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.242 [2024-10-11 22:33:35.505902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.501 [2024-10-11 22:33:35.518903] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.501 [2024-10-11 22:33:35.518944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.501 [2024-10-11 22:33:35.528970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.501 [2024-10-11 22:33:35.528997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.501 [2024-10-11 22:33:35.539611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.501 [2024-10-11 22:33:35.539638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.501 [2024-10-11 22:33:35.550589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.501 [2024-10-11 22:33:35.550616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.501 [2024-10-11 22:33:35.561498] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.501 [2024-10-11 22:33:35.561525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.501 [2024-10-11 22:33:35.573856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.501 [2024-10-11 22:33:35.573883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.501 [2024-10-11 22:33:35.582895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.501 [2024-10-11 22:33:35.582922] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.501 [2024-10-11 22:33:35.594670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.501 [2024-10-11 22:33:35.594697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.501 [2024-10-11 22:33:35.607279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.501 [2024-10-11 22:33:35.607306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.501 [2024-10-11 22:33:35.617411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.501 [2024-10-11 22:33:35.617437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.501 [2024-10-11 22:33:35.628168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.501 [2024-10-11 22:33:35.628195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.501 [2024-10-11 22:33:35.638896] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.501 [2024-10-11 22:33:35.638923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.501 [2024-10-11 22:33:35.649627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.501 [2024-10-11 22:33:35.649654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.501 [2024-10-11 22:33:35.660847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.501 [2024-10-11 22:33:35.660875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.501 [2024-10-11 22:33:35.671734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.501 [2024-10-11 22:33:35.671762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:32.501 [2024-10-11 22:33:35.684651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.501 [2024-10-11 22:33:35.684680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.501 [2024-10-11 22:33:35.694183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.501 [2024-10-11 22:33:35.694220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.501 [2024-10-11 22:33:35.705525] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.501 [2024-10-11 22:33:35.705579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.501 [2024-10-11 22:33:35.716200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.501 [2024-10-11 22:33:35.716236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.501 [2024-10-11 22:33:35.727328] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.501 [2024-10-11 22:33:35.727356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.501 [2024-10-11 22:33:35.738447] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.501 [2024-10-11 22:33:35.738474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.501 [2024-10-11 22:33:35.751409] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.501 [2024-10-11 22:33:35.751437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.501 [2024-10-11 22:33:35.761691] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.501 [2024-10-11 22:33:35.761720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.760 [2024-10-11 22:33:35.772300] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.760 [2024-10-11 22:33:35.772328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.760 [2024-10-11 22:33:35.783185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.760 [2024-10-11 22:33:35.783212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.760 [2024-10-11 22:33:35.794285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.760 [2024-10-11 22:33:35.794311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.760 [2024-10-11 22:33:35.806805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.760 [2024-10-11 22:33:35.806833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.760 [2024-10-11 22:33:35.817187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.760 [2024-10-11 22:33:35.817228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.760 [2024-10-11 22:33:35.827975] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.760 [2024-10-11 22:33:35.828001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.760 [2024-10-11 22:33:35.839077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.760 [2024-10-11 22:33:35.839103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.760 [2024-10-11 22:33:35.850063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.760 [2024-10-11 22:33:35.850089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.760 [2024-10-11 22:33:35.862446] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:32.760 [2024-10-11 22:33:35.862473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.760 [2024-10-11 22:33:35.872857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.760 [2024-10-11 22:33:35.872884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.760 [2024-10-11 22:33:35.883676] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.760 [2024-10-11 22:33:35.883703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.760 [2024-10-11 22:33:35.896510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.760 [2024-10-11 22:33:35.896559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.760 [2024-10-11 22:33:35.907239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.760 [2024-10-11 22:33:35.907265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.760 [2024-10-11 22:33:35.917725] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.760 [2024-10-11 22:33:35.917752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.760 [2024-10-11 22:33:35.928569] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.760 [2024-10-11 22:33:35.928596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.760 [2024-10-11 22:33:35.939434] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.760 [2024-10-11 22:33:35.939461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.760 [2024-10-11 22:33:35.950408] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.760 
[2024-10-11 22:33:35.950435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.760 [2024-10-11 22:33:35.963234] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.760 [2024-10-11 22:33:35.963262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.760 [2024-10-11 22:33:35.972897] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.760 [2024-10-11 22:33:35.972923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.760 [2024-10-11 22:33:35.984169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.760 [2024-10-11 22:33:35.984195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.760 [2024-10-11 22:33:35.994793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.760 [2024-10-11 22:33:35.994820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.760 [2024-10-11 22:33:36.005323] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.760 [2024-10-11 22:33:36.005349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.760 [2024-10-11 22:33:36.016199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.760 [2024-10-11 22:33:36.016226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.760 [2024-10-11 22:33:36.026560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.760 [2024-10-11 22:33:36.026587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.018 [2024-10-11 22:33:36.037239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.018 [2024-10-11 22:33:36.037266] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.018 [2024-10-11 22:33:36.047626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.018 [2024-10-11 22:33:36.047653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.018 [2024-10-11 22:33:36.058258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.018 [2024-10-11 22:33:36.058285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.018 [2024-10-11 22:33:36.069338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.018 [2024-10-11 22:33:36.069365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.018 [2024-10-11 22:33:36.080071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.018 [2024-10-11 22:33:36.080113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.018 [2024-10-11 22:33:36.091380] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.018 [2024-10-11 22:33:36.091406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.018 [2024-10-11 22:33:36.104024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.018 [2024-10-11 22:33:36.104052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.018 [2024-10-11 22:33:36.114453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.018 [2024-10-11 22:33:36.114479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.018 [2024-10-11 22:33:36.125416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.018 [2024-10-11 22:33:36.125442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:33.018 [2024-10-11 22:33:36.137706] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.018 [2024-10-11 22:33:36.137733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.018 [2024-10-11 22:33:36.147864] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.018 [2024-10-11 22:33:36.147892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.018 [2024-10-11 22:33:36.158413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.018 [2024-10-11 22:33:36.158446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.018 [2024-10-11 22:33:36.169446] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.018 [2024-10-11 22:33:36.169472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.018 [2024-10-11 22:33:36.181668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.018 [2024-10-11 22:33:36.181695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.018 [2024-10-11 22:33:36.190877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.018 [2024-10-11 22:33:36.190918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.018 [2024-10-11 22:33:36.203985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.018 [2024-10-11 22:33:36.204012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.018 [2024-10-11 22:33:36.214426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.018 [2024-10-11 22:33:36.214453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.018 [2024-10-11 22:33:36.225327] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.018 [2024-10-11 22:33:36.225353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.018 [2024-10-11 22:33:36.237780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.018 [2024-10-11 22:33:36.237821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.018 [2024-10-11 22:33:36.247500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.018 [2024-10-11 22:33:36.247527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.018 [2024-10-11 22:33:36.258956] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.018 [2024-10-11 22:33:36.258998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.018 11842.00 IOPS, 92.52 MiB/s [2024-10-11T20:33:36.286Z] [2024-10-11 22:33:36.272039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.018 [2024-10-11 22:33:36.272066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.018 [2024-10-11 22:33:36.282306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.018 [2024-10-11 22:33:36.282332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.276 [2024-10-11 22:33:36.293243] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.276 [2024-10-11 22:33:36.293270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.276 [2024-10-11 22:33:36.306404] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.276 [2024-10-11 22:33:36.306430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.276 [2024-10-11 22:33:36.316925] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.276 [2024-10-11 22:33:36.316952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.276 [2024-10-11 22:33:36.327743] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.276 [2024-10-11 22:33:36.327771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.276 [2024-10-11 22:33:36.341133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.276 [2024-10-11 22:33:36.341160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.276 [2024-10-11 22:33:36.351353] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.276 [2024-10-11 22:33:36.351379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.276 [2024-10-11 22:33:36.362211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.276 [2024-10-11 22:33:36.362238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.276 [2024-10-11 22:33:36.373070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.276 [2024-10-11 22:33:36.373104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.276 [2024-10-11 22:33:36.384024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.276 [2024-10-11 22:33:36.384050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.276 [2024-10-11 22:33:36.396864] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.276 [2024-10-11 22:33:36.396891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.276 [2024-10-11 22:33:36.406864] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:33.276 [2024-10-11 22:33:36.406891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.276 [2024-10-11 22:33:36.417523] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.276 [2024-10-11 22:33:36.417574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.276 [2024-10-11 22:33:36.430605] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.276 [2024-10-11 22:33:36.430632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.276 [2024-10-11 22:33:36.442536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.276 [2024-10-11 22:33:36.442587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.276 [2024-10-11 22:33:36.452434] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.276 [2024-10-11 22:33:36.452460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.276 [2024-10-11 22:33:36.462937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.276 [2024-10-11 22:33:36.462964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.276 [2024-10-11 22:33:36.473564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.276 [2024-10-11 22:33:36.473604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.276 [2024-10-11 22:33:36.484370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.276 [2024-10-11 22:33:36.484397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.276 [2024-10-11 22:33:36.494926] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.276 
[2024-10-11 22:33:36.494954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.276 [2024-10-11 22:33:36.505924] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.276 [2024-10-11 22:33:36.505964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.276 [2024-10-11 22:33:36.516387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.276 [2024-10-11 22:33:36.516413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.276 [2024-10-11 22:33:36.526961] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.276 [2024-10-11 22:33:36.526987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.276 [2024-10-11 22:33:36.537811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.276 [2024-10-11 22:33:36.537854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.534 [2024-10-11 22:33:36.550324] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.534 [2024-10-11 22:33:36.550351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.534 [2024-10-11 22:33:36.560990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.534 [2024-10-11 22:33:36.561017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.534 [2024-10-11 22:33:36.571369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.534 [2024-10-11 22:33:36.571396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.534 [2024-10-11 22:33:36.582500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.534 [2024-10-11 22:33:36.582548] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.534 [2024-10-11 22:33:36.593480] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.534 [2024-10-11 22:33:36.593507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.534 [2024-10-11 22:33:36.604298] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.534 [2024-10-11 22:33:36.604325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.534 [2024-10-11 22:33:36.617361] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.534 [2024-10-11 22:33:36.617388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.534 [2024-10-11 22:33:36.627796] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.534 [2024-10-11 22:33:36.627839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.534 [2024-10-11 22:33:36.638376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.534 [2024-10-11 22:33:36.638402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.534 [2024-10-11 22:33:36.649196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.534 [2024-10-11 22:33:36.649223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.534 [2024-10-11 22:33:36.662755] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.534 [2024-10-11 22:33:36.662784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.534 [2024-10-11 22:33:36.672936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.534 [2024-10-11 22:33:36.672978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:33.534 [2024-10-11 22:33:36.683682] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.534 [2024-10-11 22:33:36.683732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.534 [2024-10-11 22:33:36.694494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.534 [2024-10-11 22:33:36.694521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.534 [2024-10-11 22:33:36.705293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.534 [2024-10-11 22:33:36.705319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.534 [2024-10-11 22:33:36.718133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.534 [2024-10-11 22:33:36.718159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.534 [2024-10-11 22:33:36.728318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.534 [2024-10-11 22:33:36.728345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.534 [2024-10-11 22:33:36.739188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.534 [2024-10-11 22:33:36.739215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.534 [2024-10-11 22:33:36.750342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.534 [2024-10-11 22:33:36.750369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.534 [2024-10-11 22:33:36.761257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.534 [2024-10-11 22:33:36.761284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.534 [2024-10-11 22:33:36.772603] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.534 [2024-10-11 22:33:36.772631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.534 [2024-10-11 22:33:36.783437] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.534 [2024-10-11 22:33:36.783463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.534 [2024-10-11 22:33:36.796117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.534 [2024-10-11 22:33:36.796151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.792 [2024-10-11 22:33:36.806805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.792 [2024-10-11 22:33:36.806834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.792 [2024-10-11 22:33:36.817787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.792 [2024-10-11 22:33:36.817815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.792 [2024-10-11 22:33:36.828679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.792 [2024-10-11 22:33:36.828707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.792 [2024-10-11 22:33:36.839348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.792 [2024-10-11 22:33:36.839375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.792 [2024-10-11 22:33:36.852331] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.792 [2024-10-11 22:33:36.852358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.792 [2024-10-11 22:33:36.863110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:33.792 [2024-10-11 22:33:36.863137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.792 [2024-10-11 22:33:36.873988] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.792 [2024-10-11 22:33:36.874014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.792 [2024-10-11 22:33:36.884770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.792 [2024-10-11 22:33:36.884798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.792 [2024-10-11 22:33:36.895698] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.792 [2024-10-11 22:33:36.895726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.792 [2024-10-11 22:33:36.906603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.792 [2024-10-11 22:33:36.906632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.792 [2024-10-11 22:33:36.917513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.792 [2024-10-11 22:33:36.917565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.792 [2024-10-11 22:33:36.930662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.792 [2024-10-11 22:33:36.930704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.792 [2024-10-11 22:33:36.941374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.792 [2024-10-11 22:33:36.941401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.792 [2024-10-11 22:33:36.952233] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.792 
[2024-10-11 22:33:36.952260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.792 [2024-10-11 22:33:36.964906] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.792 [2024-10-11 22:33:36.964933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.792 [2024-10-11 22:33:36.975315] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.792 [2024-10-11 22:33:36.975357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.792 [2024-10-11 22:33:36.985985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.792 [2024-10-11 22:33:36.986012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.792 [2024-10-11 22:33:36.996394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.792 [2024-10-11 22:33:36.996421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.792 [2024-10-11 22:33:37.006923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.792 [2024-10-11 22:33:37.006951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.792 [2024-10-11 22:33:37.017541] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.792 [2024-10-11 22:33:37.017577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.792 [2024-10-11 22:33:37.028270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.792 [2024-10-11 22:33:37.028298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.793 [2024-10-11 22:33:37.040751] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.793 [2024-10-11 22:33:37.040779] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.793 [2024-10-11 22:33:37.051039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.793 [2024-10-11 22:33:37.051066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.051 [2024-10-11 22:33:37.061745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.051 [2024-10-11 22:33:37.061772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.051 [2024-10-11 22:33:37.074305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.051 [2024-10-11 22:33:37.074333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.051 [2024-10-11 22:33:37.084433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.051 [2024-10-11 22:33:37.084461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.051 [2024-10-11 22:33:37.094874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.051 [2024-10-11 22:33:37.094902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.051 [2024-10-11 22:33:37.105226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.051 [2024-10-11 22:33:37.105253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.051 [2024-10-11 22:33:37.115409] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.051 [2024-10-11 22:33:37.115437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.051 [2024-10-11 22:33:37.125612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.051 [2024-10-11 22:33:37.125639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:34.051 [2024-10-11 22:33:37.136698] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.051 [2024-10-11 22:33:37.136725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.051 [2024-10-11 22:33:37.148873] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.051 [2024-10-11 22:33:37.148900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.051 [2024-10-11 22:33:37.158924] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.051 [2024-10-11 22:33:37.158951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.051 [2024-10-11 22:33:37.169694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.051 [2024-10-11 22:33:37.169723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.051 [2024-10-11 22:33:37.182029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.051 [2024-10-11 22:33:37.182056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.051 [2024-10-11 22:33:37.191698] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.051 [2024-10-11 22:33:37.191726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.051 [2024-10-11 22:33:37.202423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.051 [2024-10-11 22:33:37.202450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.051 [2024-10-11 22:33:37.213402] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.051 [2024-10-11 22:33:37.213429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.051 [2024-10-11 22:33:37.226255] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.051 [2024-10-11 22:33:37.226281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.051 11831.00 IOPS, 92.43 MiB/s [2024-10-11T20:33:37.319Z]
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.085 [2024-10-11 22:33:38.229997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.085 11816.20 IOPS, 92.31 MiB/s [2024-10-11T20:33:38.353Z]
00:10:35.085 Latency(us)
00:10:35.085 Device Information : runtime(s)     IOPS    MiB/s  Fail/s   TO/s   Average      min      max
00:10:35.085 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:35.085 Nvme1n1            :      5.01  11818.12  92.33    0.00   0.00  10816.97  4733.16  20194.80
00:10:35.085 ===================================================================================================
00:10:35.085 Total              :            11818.12  92.33    0.00   0.00  10816.97  4733.16  20194.80
[2024-10-11 22:33:38.289887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.085 [2024-10-11 22:33:38.289925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line
42: kill: (148575) - No such process 00:10:35.344 22:33:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 148575 00:10:35.344 22:33:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.344 22:33:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.344 22:33:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.344 22:33:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.344 22:33:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:35.344 22:33:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.344 22:33:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.344 delay0 00:10:35.344 22:33:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.344 22:33:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:35.344 22:33:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.344 22:33:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.344 22:33:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.344 22:33:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:35.344 [2024-10-11 22:33:38.591594] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported 
current discovery service or discovery service referral 00:10:41.902 Initializing NVMe Controllers 00:10:41.902 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:41.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:41.902 Initialization complete. Launching workers. 00:10:41.902 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 69 00:10:41.902 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 356, failed to submit 33 00:10:41.902 success 163, unsuccessful 193, failed 0 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.902 rmmod nvme_tcp 00:10:41.902 rmmod nvme_fabrics 00:10:41.902 rmmod nvme_keyring 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 147232 ']' 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@516 -- # killprocess 147232 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 147232 ']' 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 147232 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 147232 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 147232' 00:10:41.902 killing process with pid 147232 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 147232 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 147232 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:41.902 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:10:41.902 22:33:45 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:41.903 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:41.903 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.903 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.903 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.806 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:43.806 00:10:43.806 real 0m27.916s 00:10:43.806 user 0m41.930s 00:10:43.806 sys 0m7.430s 00:10:43.806 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:43.806 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.806 ************************************ 00:10:43.806 END TEST nvmf_zcopy 00:10:43.806 ************************************ 00:10:43.806 22:33:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:43.806 22:33:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:43.806 22:33:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:43.806 22:33:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:44.065 ************************************ 00:10:44.065 START TEST nvmf_nmic 00:10:44.065 ************************************ 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:44.065 * Looking for test 
storage... 00:10:44.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.065 
22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:44.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.065 --rc genhtml_branch_coverage=1 00:10:44.065 --rc genhtml_function_coverage=1 00:10:44.065 --rc genhtml_legend=1 00:10:44.065 --rc geninfo_all_blocks=1 00:10:44.065 --rc 
geninfo_unexecuted_blocks=1 00:10:44.065 00:10:44.065 ' 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:44.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.065 --rc genhtml_branch_coverage=1 00:10:44.065 --rc genhtml_function_coverage=1 00:10:44.065 --rc genhtml_legend=1 00:10:44.065 --rc geninfo_all_blocks=1 00:10:44.065 --rc geninfo_unexecuted_blocks=1 00:10:44.065 00:10:44.065 ' 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:44.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.065 --rc genhtml_branch_coverage=1 00:10:44.065 --rc genhtml_function_coverage=1 00:10:44.065 --rc genhtml_legend=1 00:10:44.065 --rc geninfo_all_blocks=1 00:10:44.065 --rc geninfo_unexecuted_blocks=1 00:10:44.065 00:10:44.065 ' 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:44.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.065 --rc genhtml_branch_coverage=1 00:10:44.065 --rc genhtml_function_coverage=1 00:10:44.065 --rc genhtml_legend=1 00:10:44.065 --rc geninfo_all_blocks=1 00:10:44.065 --rc geninfo_unexecuted_blocks=1 00:10:44.065 00:10:44.065 ' 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.065 
22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.065 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:44.066 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:44.066 
22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:44.066 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.597 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:46.597 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:46.597 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:46.597 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:46.597 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:46.597 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:46.597 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:46.597 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:46.597 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:46.597 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:46.597 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:46.597 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:46.597 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:46.597 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:46.597 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:46.598 22:33:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:46.598 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:46.598 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 
00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:46.598 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:46.598 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:46.598 
22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:46.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:46.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:10:46.598 00:10:46.598 --- 10.0.0.2 ping statistics --- 00:10:46.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.598 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:46.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:46.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:10:46.598 00:10:46.598 --- 10.0.0.1 ping statistics --- 00:10:46.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.598 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=151970 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:46.598 
22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 151970 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 151970 ']' 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:46.598 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.598 [2024-10-11 22:33:49.691638] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:10:46.599 [2024-10-11 22:33:49.691712] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.599 [2024-10-11 22:33:49.758390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:46.599 [2024-10-11 22:33:49.808581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.599 [2024-10-11 22:33:49.808635] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:46.599 [2024-10-11 22:33:49.808665] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.599 [2024-10-11 22:33:49.808676] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:10:46.599 [2024-10-11 22:33:49.808686] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:46.599 [2024-10-11 22:33:49.810141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.599 [2024-10-11 22:33:49.810223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:46.599 [2024-10-11 22:33:49.810226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.599 [2024-10-11 22:33:49.810165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.857 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:46.857 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:46.857 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:46.857 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:46.857 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.857 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.857 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:46.857 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.857 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.857 [2024-10-11 22:33:49.961602] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:46.857 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.857 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:46.857 22:33:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.857 22:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.857 Malloc0 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.857 [2024-10-11 22:33:50.023659] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # 
echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:46.857 test case1: single bdev can't be used in multiple subsystems 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.857 [2024-10-11 22:33:50.047437] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:46.857 [2024-10-11 22:33:50.047474] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:46.857 [2024-10-11 22:33:50.047518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:10:46.857 request: 00:10:46.857 { 00:10:46.857 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:46.857 "namespace": { 00:10:46.857 "bdev_name": "Malloc0", 00:10:46.857 "no_auto_visible": false 00:10:46.857 }, 00:10:46.857 "method": "nvmf_subsystem_add_ns", 00:10:46.857 "req_id": 1 00:10:46.857 } 00:10:46.857 Got JSON-RPC error response 00:10:46.857 response: 00:10:46.857 { 00:10:46.857 "code": -32602, 00:10:46.857 "message": "Invalid parameters" 00:10:46.857 } 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:46.857 Adding namespace failed - expected result. 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:46.857 test case2: host connect to nvmf target in multiple paths 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.857 [2024-10-11 22:33:50.055619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.857 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:47.790 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:48.356 22:33:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:48.356 22:33:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:48.356 22:33:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:48.356 22:33:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:48.356 22:33:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:50.252 22:33:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:50.253 22:33:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:50.253 22:33:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:50.253 22:33:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:50.253 22:33:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:50.253 22:33:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:50.253 22:33:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:50.253 [global] 00:10:50.253 thread=1 
00:10:50.253 invalidate=1 00:10:50.253 rw=write 00:10:50.253 time_based=1 00:10:50.253 runtime=1 00:10:50.253 ioengine=libaio 00:10:50.253 direct=1 00:10:50.253 bs=4096 00:10:50.253 iodepth=1 00:10:50.253 norandommap=0 00:10:50.253 numjobs=1 00:10:50.253 00:10:50.253 verify_dump=1 00:10:50.253 verify_backlog=512 00:10:50.253 verify_state_save=0 00:10:50.253 do_verify=1 00:10:50.253 verify=crc32c-intel 00:10:50.253 [job0] 00:10:50.253 filename=/dev/nvme0n1 00:10:50.253 Could not set queue depth (nvme0n1) 00:10:50.818 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.818 fio-3.35 00:10:50.818 Starting 1 thread 00:10:52.191 00:10:52.191 job0: (groupid=0, jobs=1): err= 0: pid=152511: Fri Oct 11 22:33:55 2024 00:10:52.191 read: IOPS=22, BW=89.4KiB/s (91.6kB/s)(92.0KiB/1029msec) 00:10:52.191 slat (nsec): min=12364, max=41052, avg=25195.43, stdev=9799.99 00:10:52.191 clat (usec): min=40785, max=41010, avg=40957.04, stdev=47.34 00:10:52.191 lat (usec): min=40800, max=41028, avg=40982.23, stdev=45.59 00:10:52.191 clat percentiles (usec): 00:10:52.191 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:52.191 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:52.191 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:52.191 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:52.191 | 99.99th=[41157] 00:10:52.191 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:10:52.191 slat (nsec): min=6031, max=57149, avg=14179.06, stdev=6745.37 00:10:52.191 clat (usec): min=124, max=273, avg=150.63, stdev=11.62 00:10:52.191 lat (usec): min=130, max=330, avg=164.81, stdev=13.91 00:10:52.191 clat percentiles (usec): 00:10:52.191 | 1.00th=[ 127], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 143], 00:10:52.191 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 153], 00:10:52.191 | 70.00th=[ 157], 
80.00th=[ 159], 90.00th=[ 163], 95.00th=[ 167], 00:10:52.191 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 273], 99.95th=[ 273], 00:10:52.191 | 99.99th=[ 273] 00:10:52.191 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:52.191 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:52.191 lat (usec) : 250=95.51%, 500=0.19% 00:10:52.191 lat (msec) : 50=4.30% 00:10:52.191 cpu : usr=0.19%, sys=0.78%, ctx=535, majf=0, minf=1 00:10:52.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.192 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.192 00:10:52.192 Run status group 0 (all jobs): 00:10:52.192 READ: bw=89.4KiB/s (91.6kB/s), 89.4KiB/s-89.4KiB/s (91.6kB/s-91.6kB/s), io=92.0KiB (94.2kB), run=1029-1029msec 00:10:52.192 WRITE: bw=1990KiB/s (2038kB/s), 1990KiB/s-1990KiB/s (2038kB/s-2038kB/s), io=2048KiB (2097kB), run=1029-1029msec 00:10:52.192 00:10:52.192 Disk stats (read/write): 00:10:52.192 nvme0n1: ios=69/512, merge=0/0, ticks=798/79, in_queue=877, util=91.48% 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:52.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:52.192 rmmod nvme_tcp 00:10:52.192 rmmod nvme_fabrics 00:10:52.192 rmmod nvme_keyring 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 151970 ']' 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 151970 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 151970 ']' 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 
151970 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 151970 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 151970' 00:10:52.192 killing process with pid 151970 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 151970 00:10:52.192 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 151970 00:10:52.450 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:52.450 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:52.450 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:52.450 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:52.450 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:10:52.450 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:52.450 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:10:52.450 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:52.450 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:52.450 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.450 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.450 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.360 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:54.360 00:10:54.360 real 0m10.523s 00:10:54.360 user 0m24.003s 00:10:54.360 sys 0m2.757s 00:10:54.360 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:54.360 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.360 ************************************ 00:10:54.360 END TEST nvmf_nmic 00:10:54.360 ************************************ 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:54.620 ************************************ 00:10:54.620 START TEST nvmf_fio_target 00:10:54.620 ************************************ 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:54.620 * Looking for test storage... 
00:10:54.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:54.620 22:33:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:54.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.620 
--rc genhtml_branch_coverage=1 00:10:54.620 --rc genhtml_function_coverage=1 00:10:54.620 --rc genhtml_legend=1 00:10:54.620 --rc geninfo_all_blocks=1 00:10:54.620 --rc geninfo_unexecuted_blocks=1 00:10:54.620 00:10:54.620 ' 00:10:54.620 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:54.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.620 --rc genhtml_branch_coverage=1 00:10:54.620 --rc genhtml_function_coverage=1 00:10:54.621 --rc genhtml_legend=1 00:10:54.621 --rc geninfo_all_blocks=1 00:10:54.621 --rc geninfo_unexecuted_blocks=1 00:10:54.621 00:10:54.621 ' 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:54.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.621 --rc genhtml_branch_coverage=1 00:10:54.621 --rc genhtml_function_coverage=1 00:10:54.621 --rc genhtml_legend=1 00:10:54.621 --rc geninfo_all_blocks=1 00:10:54.621 --rc geninfo_unexecuted_blocks=1 00:10:54.621 00:10:54.621 ' 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:54.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.621 --rc genhtml_branch_coverage=1 00:10:54.621 --rc genhtml_function_coverage=1 00:10:54.621 --rc genhtml_legend=1 00:10:54.621 --rc geninfo_all_blocks=1 00:10:54.621 --rc geninfo_unexecuted_blocks=1 00:10:54.621 00:10:54.621 ' 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.621 
22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 
00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.621 22:33:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:54.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.621 22:33:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:54.621 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:57.155 22:34:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- 
# [[ tcp == rdma ]] 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:57.155 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:57.156 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:57.156 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.156 
22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:57.156 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.156 22:34:00 
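The device discovery traced above (nvmf/common.sh@408-427) resolves each detected PCI address to its kernel net device by globbing sysfs under `/sys/bus/pci/devices/$pci/net/` and stripping the path prefix with `${pci_net_devs[@]##*/}`. A minimal self-contained sketch of that lookup, using a temporary directory in place of the real sysfs tree so it can run anywhere:

```shell
# Mimic /sys/bus/pci/devices/<pci>/net/<ifname> with a temp tree so the
# glob-and-strip pattern from nvmf/common.sh can be exercised without
# the actual hardware. PCI addresses and interface names are the ones
# that appear in the trace above.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:0a:00.0/net/cvl_0_0" "$sysfs/0000:0a:00.1/net/cvl_0_1"

net_devs=""
for pci in 0000:0a:00.0 0000:0a:00.1; do
  for path in "$sysfs/$pci/net/"*; do
    dev=${path##*/}            # same prefix-stripping as pci_net_devs[@]##*/
    echo "Found net devices under $pci: $dev"
    net_devs="$net_devs $dev"
  done
done
net_devs="${net_devs# }"       # drop the leading separator
rm -rf "$sysfs"
```

On the real machine the same glob yields `cvl_0_0` and `cvl_0_1`, the two ice-driver ports the harness then uses for the TCP transport.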
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:57.156 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 
)) 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:57.156 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:57.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:10:57.156 00:10:57.156 --- 10.0.0.2 ping statistics --- 00:10:57.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.156 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:57.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:57.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:10:57.156 00:10:57.156 --- 10.0.0.1 ping statistics --- 00:10:57.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.156 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
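The namespace plumbing traced above (nvmf/common.sh@250-291, `nvmf_tcp_init`) follows a standard two-interface pattern: the target NIC is moved into a private network namespace, each side gets a /24 address, an iptables rule admits NVMe/TCP traffic on port 4420, and connectivity is verified with a ping in each direction. A minimal sketch of that pattern, with the interface names and addresses taken from the log; it is guarded so it only attempts the setup when run as root with the real NICs present (the harness's `ipts` wrapper additionally tags the rule with an `SPDK_NVMF` comment, which this sketch omits):

```shell
#!/bin/sh
# Sketch of the netns topology from the trace above. cvl_0_0 is the
# target-side port (10.0.0.2, inside the namespace); cvl_0_1 is the
# initiator-side port (10.0.0.1, root namespace).
NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1

if [ "$(id -u)" -ne 0 ] || ! ip link show "$TARGET_IF" >/dev/null 2>&1; then
  echo "skipping: needs root and interface $TARGET_IF"
else
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  # Admit NVMe/TCP (port 4420) before checking reachability both ways.
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1
fi
```

This is why the nvmf_tgt process is launched under `ip netns exec cvl_0_0_ns_spdk` later in the log: the target only sees the NIC that was moved into its namespace.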
nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=154709 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 154709 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 154709 ']' 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:57.156 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.156 [2024-10-11 22:34:00.252728] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:10:57.156 [2024-10-11 22:34:00.252817] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.156 [2024-10-11 22:34:00.318337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:57.156 [2024-10-11 22:34:00.365504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:57.156 [2024-10-11 22:34:00.365567] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:57.156 [2024-10-11 22:34:00.365597] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:57.156 [2024-10-11 22:34:00.365607] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:57.156 [2024-10-11 22:34:00.365617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:57.156 [2024-10-11 22:34:00.367295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.156 [2024-10-11 22:34:00.367400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.156 [2024-10-11 22:34:00.367580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.156 [2024-10-11 22:34:00.367586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.415 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:57.415 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:57.415 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:57.415 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:57.415 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.415 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.415 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:57.673 [2024-10-11 22:34:00.760583] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:57.673 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:57.931 22:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:57.931 22:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:58.189 22:34:01 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:58.189 22:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:58.446 22:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:58.446 22:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:58.705 22:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:58.705 22:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:59.271 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.530 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:59.530 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.788 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:59.789 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:00.046 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:00.046 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:11:00.304 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:00.561 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:00.561 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:00.819 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:00.819 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:01.076 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.334 [2024-10-11 22:34:04.445750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.334 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:01.591 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:01.849 22:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
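The bdev and subsystem provisioning that target/fio.sh issues above can be collapsed into one ordered RPC sequence: a TCP transport, seven 64 MiB/512 B-block malloc bdevs, a RAID0 over Malloc2-3, a concat over Malloc4-6, then subsystem `cnode1` with four namespaces and a listener on 10.0.0.2:4420. The sketch below reconstructs that sequence; `$RPC` is a placeholder for the full rpc.py path shown in the log, and the calls are printed rather than executed since each one needs a live nvmf_tgt on /var/tmp/spdk.sock:

```shell
# Ordered RPC sequence reconstructed from the target/fio.sh trace above.
RPC='scripts/rpc.py'                 # placeholder for the full path in the log
NQN='nqn.2016-06.io.spdk:cnode1'

emit_steps() {
  printf '%s %s\n' "$RPC" "nvmf_create_transport -t tcp -o -u 8192"
  for _ in 0 1 2 3 4 5 6; do         # Malloc0..Malloc6: 64 MiB, 512 B blocks
    printf '%s %s\n' "$RPC" "bdev_malloc_create 64 512"
  done
  printf '%s %s\n' "$RPC" "bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'"
  printf '%s %s\n' "$RPC" "bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'"
  printf '%s %s\n' "$RPC" "nvmf_create_subsystem $NQN -a -s SPDKISFASTANDAWESOME"
  for bdev in Malloc0 Malloc1 raid0 concat0; do
    printf '%s %s\n' "$RPC" "nvmf_subsystem_add_ns $NQN $bdev"
  done
  printf '%s %s\n' "$RPC" "nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420"
}
emit_steps
```

After the listener is up, the initiator side connects with `nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420`, as traced above, which is why the subsequent `lsblk` wait loop expects exactly four namespaces (`nvme0n1`..`nvme0n4`) with the SPDKISFASTANDAWESOME serial.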
00:11:02.414 22:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:02.414 22:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:02.414 22:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:02.415 22:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:02.415 22:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:02.415 22:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:04.940 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:04.940 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:04.940 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:04.940 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:04.940 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:04.940 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:04.940 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:04.940 [global] 00:11:04.940 thread=1 00:11:04.940 invalidate=1 00:11:04.940 rw=write 00:11:04.940 time_based=1 00:11:04.940 runtime=1 00:11:04.940 ioengine=libaio 00:11:04.940 direct=1 00:11:04.940 bs=4096 00:11:04.940 iodepth=1 00:11:04.940 norandommap=0 00:11:04.940 numjobs=1 00:11:04.940 00:11:04.940 
verify_dump=1 00:11:04.940 verify_backlog=512 00:11:04.940 verify_state_save=0 00:11:04.940 do_verify=1 00:11:04.940 verify=crc32c-intel 00:11:04.940 [job0] 00:11:04.940 filename=/dev/nvme0n1 00:11:04.940 [job1] 00:11:04.940 filename=/dev/nvme0n2 00:11:04.940 [job2] 00:11:04.940 filename=/dev/nvme0n3 00:11:04.940 [job3] 00:11:04.940 filename=/dev/nvme0n4 00:11:04.940 Could not set queue depth (nvme0n1) 00:11:04.940 Could not set queue depth (nvme0n2) 00:11:04.940 Could not set queue depth (nvme0n3) 00:11:04.940 Could not set queue depth (nvme0n4) 00:11:04.940 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.940 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.940 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.940 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.940 fio-3.35 00:11:04.940 Starting 4 threads 00:11:05.872 00:11:05.873 job0: (groupid=0, jobs=1): err= 0: pid=155783: Fri Oct 11 22:34:09 2024 00:11:05.873 read: IOPS=1931, BW=7724KiB/s (7910kB/s)(7732KiB/1001msec) 00:11:05.873 slat (nsec): min=5496, max=54196, avg=13279.41, stdev=5127.00 00:11:05.873 clat (usec): min=171, max=41014, avg=305.18, stdev=1844.10 00:11:05.873 lat (usec): min=177, max=41032, avg=318.46, stdev=1844.22 00:11:05.873 clat percentiles (usec): 00:11:05.873 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 198], 00:11:05.873 | 30.00th=[ 208], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 231], 00:11:05.873 | 70.00th=[ 235], 80.00th=[ 239], 90.00th=[ 245], 95.00th=[ 251], 00:11:05.873 | 99.00th=[ 277], 99.50th=[ 306], 99.90th=[41157], 99.95th=[41157], 00:11:05.873 | 99.99th=[41157] 00:11:05.873 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:05.873 slat (nsec): min=7580, max=58249, avg=14712.23, 
stdev=6461.58 00:11:05.873 clat (usec): min=126, max=270, avg=164.99, stdev=22.78 00:11:05.873 lat (usec): min=137, max=287, avg=179.70, stdev=26.12 00:11:05.873 clat percentiles (usec): 00:11:05.873 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 143], 00:11:05.873 | 30.00th=[ 149], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 169], 00:11:05.873 | 70.00th=[ 174], 80.00th=[ 182], 90.00th=[ 194], 95.00th=[ 208], 00:11:05.873 | 99.00th=[ 239], 99.50th=[ 245], 99.90th=[ 258], 99.95th=[ 260], 00:11:05.873 | 99.99th=[ 273] 00:11:05.873 bw ( KiB/s): min=11752, max=11752, per=84.27%, avg=11752.00, stdev= 0.00, samples=1 00:11:05.873 iops : min= 2938, max= 2938, avg=2938.00, stdev= 0.00, samples=1 00:11:05.873 lat (usec) : 250=96.71%, 500=3.19% 00:11:05.873 lat (msec) : 50=0.10% 00:11:05.873 cpu : usr=5.90%, sys=6.00%, ctx=3981, majf=0, minf=1 00:11:05.873 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:05.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.873 issued rwts: total=1933,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.873 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:05.873 job1: (groupid=0, jobs=1): err= 0: pid=155784: Fri Oct 11 22:34:09 2024 00:11:05.873 read: IOPS=271, BW=1086KiB/s (1112kB/s)(1116KiB/1028msec) 00:11:05.873 slat (nsec): min=5517, max=47316, avg=14619.47, stdev=6683.58 00:11:05.873 clat (usec): min=178, max=41240, avg=3291.80, stdev=10723.38 00:11:05.873 lat (usec): min=184, max=41256, avg=3306.42, stdev=10723.84 00:11:05.873 clat percentiles (usec): 00:11:05.873 | 1.00th=[ 182], 5.00th=[ 194], 10.00th=[ 204], 20.00th=[ 225], 00:11:05.873 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 239], 60.00th=[ 245], 00:11:05.873 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 314], 95.00th=[40633], 00:11:05.873 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 
00:11:05.873 | 99.99th=[41157] 00:11:05.873 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:11:05.873 slat (nsec): min=7474, max=46780, avg=12883.24, stdev=6573.28 00:11:05.873 clat (usec): min=147, max=238, avg=185.64, stdev=19.28 00:11:05.873 lat (usec): min=155, max=274, avg=198.53, stdev=21.41 00:11:05.873 clat percentiles (usec): 00:11:05.873 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 167], 00:11:05.873 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 192], 00:11:05.873 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 212], 95.00th=[ 219], 00:11:05.873 | 99.00th=[ 231], 99.50th=[ 237], 99.90th=[ 239], 99.95th=[ 239], 00:11:05.873 | 99.99th=[ 239] 00:11:05.873 bw ( KiB/s): min= 4096, max= 4096, per=29.37%, avg=4096.00, stdev= 0.00, samples=1 00:11:05.873 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:05.873 lat (usec) : 250=90.52%, 500=6.83% 00:11:05.873 lat (msec) : 50=2.65% 00:11:05.873 cpu : usr=0.49%, sys=1.66%, ctx=792, majf=0, minf=1 00:11:05.873 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:05.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.873 issued rwts: total=279,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.873 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:05.873 job2: (groupid=0, jobs=1): err= 0: pid=155787: Fri Oct 11 22:34:09 2024 00:11:05.873 read: IOPS=20, BW=83.3KiB/s (85.3kB/s)(84.0KiB/1008msec) 00:11:05.873 slat (nsec): min=8639, max=46414, avg=27638.67, stdev=10461.48 00:11:05.873 clat (usec): min=40983, max=42114, avg=41828.22, stdev=346.10 00:11:05.873 lat (usec): min=40999, max=42123, avg=41855.86, stdev=349.83 00:11:05.873 clat percentiles (usec): 00:11:05.873 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:11:05.873 | 30.00th=[41681], 40.00th=[42206], 
50.00th=[42206], 60.00th=[42206], 00:11:05.873 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:05.873 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:05.873 | 99.99th=[42206] 00:11:05.873 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:11:05.873 slat (usec): min=7, max=15666, avg=42.48, stdev=691.85 00:11:05.873 clat (usec): min=144, max=384, avg=205.22, stdev=38.15 00:11:05.873 lat (usec): min=152, max=16050, avg=247.70, stdev=700.90 00:11:05.873 clat percentiles (usec): 00:11:05.873 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 169], 00:11:05.873 | 30.00th=[ 180], 40.00th=[ 190], 50.00th=[ 198], 60.00th=[ 210], 00:11:05.873 | 70.00th=[ 225], 80.00th=[ 241], 90.00th=[ 255], 95.00th=[ 277], 00:11:05.873 | 99.00th=[ 306], 99.50th=[ 322], 99.90th=[ 383], 99.95th=[ 383], 00:11:05.873 | 99.99th=[ 383] 00:11:05.873 bw ( KiB/s): min= 4096, max= 4096, per=29.37%, avg=4096.00, stdev= 0.00, samples=1 00:11:05.873 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:05.873 lat (usec) : 250=83.68%, 500=12.38% 00:11:05.873 lat (msec) : 50=3.94% 00:11:05.873 cpu : usr=0.00%, sys=0.89%, ctx=537, majf=0, minf=1 00:11:05.873 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:05.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.873 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.873 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:05.873 job3: (groupid=0, jobs=1): err= 0: pid=155788: Fri Oct 11 22:34:09 2024 00:11:05.873 read: IOPS=21, BW=86.9KiB/s (89.0kB/s)(88.0KiB/1013msec) 00:11:05.873 slat (nsec): min=7077, max=32934, avg=24512.18, stdev=9097.03 00:11:05.873 clat (usec): min=40877, max=41275, avg=40979.05, stdev=83.95 00:11:05.873 lat (usec): min=40910, max=41282, avg=41003.57, 
stdev=78.73 00:11:05.873 clat percentiles (usec): 00:11:05.873 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:05.873 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:05.873 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:05.873 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:05.873 | 99.99th=[41157] 00:11:05.873 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:11:05.873 slat (nsec): min=5854, max=42337, avg=11205.98, stdev=7448.57 00:11:05.873 clat (usec): min=143, max=402, avg=201.51, stdev=45.30 00:11:05.873 lat (usec): min=150, max=442, avg=212.72, stdev=48.35 00:11:05.873 clat percentiles (usec): 00:11:05.873 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 163], 00:11:05.873 | 30.00th=[ 169], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 204], 00:11:05.873 | 70.00th=[ 229], 80.00th=[ 241], 90.00th=[ 253], 95.00th=[ 277], 00:11:05.873 | 99.00th=[ 355], 99.50th=[ 388], 99.90th=[ 404], 99.95th=[ 404], 00:11:05.873 | 99.99th=[ 404] 00:11:05.873 bw ( KiB/s): min= 4096, max= 4096, per=29.37%, avg=4096.00, stdev= 0.00, samples=1 00:11:05.873 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:05.873 lat (usec) : 250=85.21%, 500=10.67% 00:11:05.873 lat (msec) : 50=4.12% 00:11:05.873 cpu : usr=0.10%, sys=0.69%, ctx=534, majf=0, minf=1 00:11:05.873 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:05.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.873 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.873 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:05.873 00:11:05.873 Run status group 0 (all jobs): 00:11:05.873 READ: bw=8774KiB/s (8985kB/s), 83.3KiB/s-7724KiB/s (85.3kB/s-7910kB/s), io=9020KiB (9236kB), run=1001-1028msec 
00:11:05.873 WRITE: bw=13.6MiB/s (14.3MB/s), 1992KiB/s-8184KiB/s (2040kB/s-8380kB/s), io=14.0MiB (14.7MB), run=1001-1028msec 00:11:05.873 00:11:05.873 Disk stats (read/write): 00:11:05.873 nvme0n1: ios=1588/2048, merge=0/0, ticks=422/300, in_queue=722, util=86.27% 00:11:05.873 nvme0n2: ios=208/512, merge=0/0, ticks=1086/92, in_queue=1178, util=90.83% 00:11:05.873 nvme0n3: ios=76/512, merge=0/0, ticks=1234/108, in_queue=1342, util=97.59% 00:11:05.873 nvme0n4: ios=17/512, merge=0/0, ticks=698/99, in_queue=797, util=89.53% 00:11:05.873 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:05.873 [global] 00:11:05.873 thread=1 00:11:05.873 invalidate=1 00:11:05.873 rw=randwrite 00:11:05.873 time_based=1 00:11:05.873 runtime=1 00:11:05.873 ioengine=libaio 00:11:05.873 direct=1 00:11:05.873 bs=4096 00:11:05.873 iodepth=1 00:11:05.873 norandommap=0 00:11:05.873 numjobs=1 00:11:05.873 00:11:05.873 verify_dump=1 00:11:05.873 verify_backlog=512 00:11:05.873 verify_state_save=0 00:11:05.873 do_verify=1 00:11:05.873 verify=crc32c-intel 00:11:05.873 [job0] 00:11:05.873 filename=/dev/nvme0n1 00:11:05.873 [job1] 00:11:05.873 filename=/dev/nvme0n2 00:11:05.873 [job2] 00:11:05.873 filename=/dev/nvme0n3 00:11:05.873 [job3] 00:11:05.873 filename=/dev/nvme0n4 00:11:06.131 Could not set queue depth (nvme0n1) 00:11:06.131 Could not set queue depth (nvme0n2) 00:11:06.131 Could not set queue depth (nvme0n3) 00:11:06.131 Could not set queue depth (nvme0n4) 00:11:06.131 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.131 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.131 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.131 job3: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.131 fio-3.35 00:11:06.131 Starting 4 threads 00:11:07.512 00:11:07.512 job0: (groupid=0, jobs=1): err= 0: pid=156023: Fri Oct 11 22:34:10 2024 00:11:07.512 read: IOPS=790, BW=3162KiB/s (3238kB/s)(3184KiB/1007msec) 00:11:07.512 slat (nsec): min=5050, max=44579, avg=13501.19, stdev=4813.82 00:11:07.512 clat (usec): min=175, max=41350, avg=998.60, stdev=5547.92 00:11:07.512 lat (usec): min=180, max=41356, avg=1012.10, stdev=5548.46 00:11:07.512 clat percentiles (usec): 00:11:07.512 | 1.00th=[ 182], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 206], 00:11:07.512 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 237], 00:11:07.512 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 269], 00:11:07.512 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:07.512 | 99.99th=[41157] 00:11:07.512 write: IOPS=1016, BW=4068KiB/s (4165kB/s)(4096KiB/1007msec); 0 zone resets 00:11:07.512 slat (nsec): min=7073, max=43171, avg=9783.67, stdev=3500.55 00:11:07.512 clat (usec): min=140, max=1991, avg=179.59, stdev=66.63 00:11:07.512 lat (usec): min=147, max=2004, avg=189.37, stdev=66.97 00:11:07.512 clat percentiles (usec): 00:11:07.512 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:11:07.512 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 178], 00:11:07.512 | 70.00th=[ 186], 80.00th=[ 194], 90.00th=[ 204], 95.00th=[ 219], 00:11:07.512 | 99.00th=[ 258], 99.50th=[ 367], 99.90th=[ 914], 99.95th=[ 1991], 00:11:07.512 | 99.99th=[ 1991] 00:11:07.512 bw ( KiB/s): min= 1240, max= 6952, per=20.89%, avg=4096.00, stdev=4038.99, samples=2 00:11:07.512 iops : min= 310, max= 1738, avg=1024.00, stdev=1009.75, samples=2 00:11:07.512 lat (usec) : 250=91.76%, 500=7.25%, 1000=0.05% 00:11:07.512 lat (msec) : 2=0.11%, 50=0.82% 00:11:07.512 cpu : usr=1.69%, sys=2.49%, ctx=1821, majf=0, minf=1 00:11:07.512 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.512 issued rwts: total=796,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.512 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.512 job1: (groupid=0, jobs=1): err= 0: pid=156028: Fri Oct 11 22:34:10 2024 00:11:07.512 read: IOPS=1583, BW=6334KiB/s (6486kB/s)(6340KiB/1001msec) 00:11:07.512 slat (nsec): min=4730, max=49154, avg=12591.01, stdev=6129.45 00:11:07.512 clat (usec): min=164, max=41225, avg=361.09, stdev=2286.57 00:11:07.512 lat (usec): min=169, max=41235, avg=373.69, stdev=2286.99 00:11:07.512 clat percentiles (usec): 00:11:07.512 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 198], 00:11:07.512 | 30.00th=[ 210], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 231], 00:11:07.512 | 70.00th=[ 237], 80.00th=[ 245], 90.00th=[ 262], 95.00th=[ 355], 00:11:07.512 | 99.00th=[ 553], 99.50th=[ 570], 99.90th=[41157], 99.95th=[41157], 00:11:07.512 | 99.99th=[41157] 00:11:07.512 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:07.512 slat (nsec): min=6950, max=58679, avg=13498.97, stdev=6052.17 00:11:07.512 clat (usec): min=127, max=833, avg=178.75, stdev=30.87 00:11:07.512 lat (usec): min=136, max=841, avg=192.25, stdev=32.78 00:11:07.512 clat percentiles (usec): 00:11:07.512 | 1.00th=[ 139], 5.00th=[ 149], 10.00th=[ 155], 20.00th=[ 159], 00:11:07.512 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 180], 00:11:07.512 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 206], 95.00th=[ 223], 00:11:07.512 | 99.00th=[ 306], 99.50th=[ 322], 99.90th=[ 334], 99.95th=[ 486], 00:11:07.512 | 99.99th=[ 832] 00:11:07.512 bw ( KiB/s): min=10240, max=10240, per=52.23%, avg=10240.00, stdev= 0.00, samples=1 00:11:07.512 iops : min= 2560, max= 2560, avg=2560.00, stdev= 0.00, samples=1 00:11:07.512 lat (usec) : 250=92.07%, 
500=7.35%, 750=0.41%, 1000=0.03% 00:11:07.512 lat (msec) : 50=0.14% 00:11:07.512 cpu : usr=3.90%, sys=5.40%, ctx=3633, majf=0, minf=2 00:11:07.512 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.512 issued rwts: total=1585,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.512 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.512 job2: (groupid=0, jobs=1): err= 0: pid=156029: Fri Oct 11 22:34:10 2024 00:11:07.512 read: IOPS=28, BW=114KiB/s (116kB/s)(116KiB/1021msec) 00:11:07.512 slat (nsec): min=8408, max=34367, avg=20258.97, stdev=7995.22 00:11:07.512 clat (usec): min=283, max=41349, avg=31226.54, stdev=17660.55 00:11:07.512 lat (usec): min=292, max=41377, avg=31246.80, stdev=17664.39 00:11:07.512 clat percentiles (usec): 00:11:07.512 | 1.00th=[ 285], 5.00th=[ 289], 10.00th=[ 310], 20.00th=[ 445], 00:11:07.512 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:07.512 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:07.512 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:07.512 | 99.99th=[41157] 00:11:07.512 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:11:07.512 slat (nsec): min=5992, max=36600, avg=9022.98, stdev=5507.42 00:11:07.512 clat (usec): min=141, max=424, avg=211.38, stdev=47.50 00:11:07.512 lat (usec): min=147, max=431, avg=220.40, stdev=48.40 00:11:07.512 clat percentiles (usec): 00:11:07.512 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:11:07.512 | 30.00th=[ 172], 40.00th=[ 180], 50.00th=[ 219], 60.00th=[ 231], 00:11:07.512 | 70.00th=[ 239], 80.00th=[ 243], 90.00th=[ 255], 95.00th=[ 285], 00:11:07.512 | 99.00th=[ 375], 99.50th=[ 392], 99.90th=[ 424], 99.95th=[ 424], 00:11:07.512 | 99.99th=[ 424] 
00:11:07.512 bw ( KiB/s): min= 4096, max= 4096, per=20.89%, avg=4096.00, stdev= 0.00, samples=1 00:11:07.512 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:07.512 lat (usec) : 250=82.81%, 500=12.94% 00:11:07.512 lat (msec) : 2=0.18%, 50=4.07% 00:11:07.512 cpu : usr=0.20%, sys=0.49%, ctx=544, majf=0, minf=1 00:11:07.512 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.512 issued rwts: total=29,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.512 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.512 job3: (groupid=0, jobs=1): err= 0: pid=156030: Fri Oct 11 22:34:10 2024 00:11:07.512 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:11:07.512 slat (nsec): min=5555, max=74200, avg=13558.84, stdev=9355.51 00:11:07.512 clat (usec): min=178, max=42347, avg=655.42, stdev=4021.16 00:11:07.512 lat (usec): min=185, max=42358, avg=668.98, stdev=4022.35 00:11:07.512 clat percentiles (usec): 00:11:07.512 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 200], 00:11:07.512 | 30.00th=[ 206], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 237], 00:11:07.512 | 70.00th=[ 258], 80.00th=[ 289], 90.00th=[ 392], 95.00th=[ 482], 00:11:07.512 | 99.00th=[ 2638], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:11:07.512 | 99.99th=[42206] 00:11:07.512 write: IOPS=1418, BW=5674KiB/s (5811kB/s)(5680KiB/1001msec); 0 zone resets 00:11:07.512 slat (nsec): min=5740, max=70302, avg=14002.51, stdev=8126.62 00:11:07.512 clat (usec): min=131, max=789, avg=201.42, stdev=58.77 00:11:07.512 lat (usec): min=139, max=795, avg=215.42, stdev=60.49 00:11:07.512 clat percentiles (usec): 00:11:07.512 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:11:07.512 | 30.00th=[ 163], 40.00th=[ 174], 50.00th=[ 184], 60.00th=[ 198], 
00:11:07.512 | 70.00th=[ 223], 80.00th=[ 239], 90.00th=[ 262], 95.00th=[ 322], 00:11:07.512 | 99.00th=[ 383], 99.50th=[ 400], 99.90th=[ 709], 99.95th=[ 791], 00:11:07.512 | 99.99th=[ 791] 00:11:07.512 bw ( KiB/s): min= 4096, max= 4096, per=20.89%, avg=4096.00, stdev= 0.00, samples=1 00:11:07.513 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:07.513 lat (usec) : 250=78.81%, 500=19.52%, 750=1.19%, 1000=0.04% 00:11:07.513 lat (msec) : 4=0.04%, 50=0.41% 00:11:07.513 cpu : usr=1.50%, sys=3.80%, ctx=2444, majf=0, minf=1 00:11:07.513 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.513 issued rwts: total=1024,1420,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.513 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.513 00:11:07.513 Run status group 0 (all jobs): 00:11:07.513 READ: bw=13.1MiB/s (13.8MB/s), 114KiB/s-6334KiB/s (116kB/s-6486kB/s), io=13.4MiB (14.1MB), run=1001-1021msec 00:11:07.513 WRITE: bw=19.1MiB/s (20.1MB/s), 2006KiB/s-8184KiB/s (2054kB/s-8380kB/s), io=19.5MiB (20.5MB), run=1001-1021msec 00:11:07.513 00:11:07.513 Disk stats (read/write): 00:11:07.513 nvme0n1: ios=720/1024, merge=0/0, ticks=661/165, in_queue=826, util=87.27% 00:11:07.513 nvme0n2: ios=1464/1536, merge=0/0, ticks=540/270, in_queue=810, util=86.90% 00:11:07.513 nvme0n3: ios=48/512, merge=0/0, ticks=870/103, in_queue=973, util=96.35% 00:11:07.513 nvme0n4: ios=970/1024, merge=0/0, ticks=605/194, in_queue=799, util=89.60% 00:11:07.513 22:34:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:07.513 [global] 00:11:07.513 thread=1 00:11:07.513 invalidate=1 00:11:07.513 rw=write 00:11:07.513 time_based=1 00:11:07.513 runtime=1 
00:11:07.513 ioengine=libaio 00:11:07.513 direct=1 00:11:07.513 bs=4096 00:11:07.513 iodepth=128 00:11:07.513 norandommap=0 00:11:07.513 numjobs=1 00:11:07.513 00:11:07.513 verify_dump=1 00:11:07.513 verify_backlog=512 00:11:07.513 verify_state_save=0 00:11:07.513 do_verify=1 00:11:07.513 verify=crc32c-intel 00:11:07.513 [job0] 00:11:07.513 filename=/dev/nvme0n1 00:11:07.513 [job1] 00:11:07.513 filename=/dev/nvme0n2 00:11:07.513 [job2] 00:11:07.513 filename=/dev/nvme0n3 00:11:07.513 [job3] 00:11:07.513 filename=/dev/nvme0n4 00:11:07.513 Could not set queue depth (nvme0n1) 00:11:07.513 Could not set queue depth (nvme0n2) 00:11:07.513 Could not set queue depth (nvme0n3) 00:11:07.513 Could not set queue depth (nvme0n4) 00:11:07.771 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.771 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.771 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.771 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.771 fio-3.35 00:11:07.771 Starting 4 threads 00:11:09.146 00:11:09.146 job0: (groupid=0, jobs=1): err= 0: pid=156254: Fri Oct 11 22:34:12 2024 00:11:09.146 read: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec) 00:11:09.146 slat (usec): min=2, max=11762, avg=98.98, stdev=674.85 00:11:09.146 clat (usec): min=2093, max=33501, avg=12709.72, stdev=3913.79 00:11:09.146 lat (usec): min=2100, max=33517, avg=12808.69, stdev=3962.14 00:11:09.146 clat percentiles (usec): 00:11:09.146 | 1.00th=[ 5145], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[10028], 00:11:09.146 | 30.00th=[10552], 40.00th=[11338], 50.00th=[11731], 60.00th=[12256], 00:11:09.146 | 70.00th=[13698], 80.00th=[14484], 90.00th=[17695], 95.00th=[19792], 00:11:09.146 | 99.00th=[28705], 99.50th=[30278], 99.90th=[33424], 
99.95th=[33424], 00:11:09.146 | 99.99th=[33424] 00:11:09.146 write: IOPS=4842, BW=18.9MiB/s (19.8MB/s)(19.1MiB/1011msec); 0 zone resets 00:11:09.146 slat (usec): min=3, max=9672, avg=98.53, stdev=513.61 00:11:09.146 clat (usec): min=1334, max=35318, avg=14223.78, stdev=6018.31 00:11:09.146 lat (usec): min=1344, max=35325, avg=14322.31, stdev=6062.15 00:11:09.146 clat percentiles (usec): 00:11:09.146 | 1.00th=[ 4146], 5.00th=[ 6325], 10.00th=[ 8455], 20.00th=[10290], 00:11:09.146 | 30.00th=[10814], 40.00th=[11469], 50.00th=[11994], 60.00th=[12518], 00:11:09.146 | 70.00th=[15533], 80.00th=[22152], 90.00th=[23462], 95.00th=[25035], 00:11:09.146 | 99.00th=[28705], 99.50th=[31065], 99.90th=[33424], 99.95th=[35390], 00:11:09.146 | 99.99th=[35390] 00:11:09.146 bw ( KiB/s): min=18992, max=19160, per=31.24%, avg=19076.00, stdev=118.79, samples=2 00:11:09.146 iops : min= 4748, max= 4790, avg=4769.00, stdev=29.70, samples=2 00:11:09.146 lat (msec) : 2=0.07%, 4=0.51%, 10=17.15%, 20=67.56%, 50=14.71% 00:11:09.146 cpu : usr=6.04%, sys=11.49%, ctx=479, majf=0, minf=1 00:11:09.146 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:09.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.146 issued rwts: total=4608,4896,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.146 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.146 job1: (groupid=0, jobs=1): err= 0: pid=156255: Fri Oct 11 22:34:12 2024 00:11:09.146 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:11:09.146 slat (usec): min=2, max=14480, avg=140.75, stdev=797.62 00:11:09.146 clat (usec): min=7907, max=41334, avg=18437.58, stdev=9339.96 00:11:09.146 lat (usec): min=8801, max=41348, avg=18578.33, stdev=9377.01 00:11:09.146 clat percentiles (usec): 00:11:09.146 | 1.00th=[ 9634], 5.00th=[10290], 10.00th=[11076], 20.00th=[12125], 00:11:09.146 | 
30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13960], 00:11:09.146 | 70.00th=[18482], 80.00th=[27919], 90.00th=[35914], 95.00th=[36963], 00:11:09.146 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:09.146 | 99.99th=[41157] 00:11:09.146 write: IOPS=3864, BW=15.1MiB/s (15.8MB/s)(15.1MiB/1002msec); 0 zone resets 00:11:09.146 slat (usec): min=3, max=22236, avg=121.79, stdev=767.80 00:11:09.146 clat (usec): min=227, max=62306, avg=15696.06, stdev=7239.47 00:11:09.146 lat (usec): min=2800, max=62312, avg=15817.84, stdev=7270.85 00:11:09.146 clat percentiles (usec): 00:11:09.146 | 1.00th=[ 5735], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[11731], 00:11:09.146 | 30.00th=[12387], 40.00th=[12649], 50.00th=[13173], 60.00th=[13960], 00:11:09.146 | 70.00th=[16188], 80.00th=[19268], 90.00th=[21365], 95.00th=[30540], 00:11:09.146 | 99.00th=[45876], 99.50th=[47449], 99.90th=[62129], 99.95th=[62129], 00:11:09.146 | 99.99th=[62129] 00:11:09.147 bw ( KiB/s): min=13568, max=16384, per=24.53%, avg=14976.00, stdev=1991.21, samples=2 00:11:09.147 iops : min= 3392, max= 4096, avg=3744.00, stdev=497.80, samples=2 00:11:09.147 lat (usec) : 250=0.01% 00:11:09.147 lat (msec) : 4=0.43%, 10=4.24%, 20=72.02%, 50=23.22%, 100=0.08% 00:11:09.147 cpu : usr=3.40%, sys=4.60%, ctx=356, majf=0, minf=1 00:11:09.147 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:09.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.147 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.147 issued rwts: total=3584,3872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.147 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.147 job2: (groupid=0, jobs=1): err= 0: pid=156257: Fri Oct 11 22:34:12 2024 00:11:09.147 read: IOPS=3984, BW=15.6MiB/s (16.3MB/s)(16.2MiB/1044msec) 00:11:09.147 slat (usec): min=2, max=8255, avg=110.57, stdev=563.23 00:11:09.147 clat (usec): min=6698, 
max=51814, avg=14957.89, stdev=4593.40 00:11:09.147 lat (usec): min=6703, max=51826, avg=15068.46, stdev=4587.12 00:11:09.147 clat percentiles (usec): 00:11:09.147 | 1.00th=[ 9765], 5.00th=[11469], 10.00th=[12256], 20.00th=[12911], 00:11:09.147 | 30.00th=[13698], 40.00th=[14222], 50.00th=[14484], 60.00th=[14746], 00:11:09.147 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16319], 95.00th=[18744], 00:11:09.147 | 99.00th=[45351], 99.50th=[50070], 99.90th=[51643], 99.95th=[51643], 00:11:09.147 | 99.99th=[51643] 00:11:09.147 write: IOPS=4413, BW=17.2MiB/s (18.1MB/s)(18.0MiB/1044msec); 0 zone resets 00:11:09.147 slat (usec): min=3, max=13722, avg=110.37, stdev=590.53 00:11:09.147 clat (usec): min=8180, max=59065, avg=15127.32, stdev=5975.10 00:11:09.147 lat (usec): min=8193, max=59071, avg=15237.69, stdev=5972.63 00:11:09.147 clat percentiles (usec): 00:11:09.147 | 1.00th=[ 9241], 5.00th=[11076], 10.00th=[11600], 20.00th=[12387], 00:11:09.147 | 30.00th=[12911], 40.00th=[13304], 50.00th=[14222], 60.00th=[14746], 00:11:09.147 | 70.00th=[15139], 80.00th=[15664], 90.00th=[17171], 95.00th=[21627], 00:11:09.147 | 99.00th=[53740], 99.50th=[56361], 99.90th=[58983], 99.95th=[58983], 00:11:09.147 | 99.99th=[58983] 00:11:09.147 bw ( KiB/s): min=16384, max=19976, per=29.78%, avg=18180.00, stdev=2539.93, samples=2 00:11:09.147 iops : min= 4096, max= 4994, avg=4545.00, stdev=634.98, samples=2 00:11:09.147 lat (msec) : 10=1.20%, 20=93.96%, 50=3.87%, 100=0.98% 00:11:09.147 cpu : usr=3.64%, sys=5.94%, ctx=419, majf=0, minf=2 00:11:09.147 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:09.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.147 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.147 issued rwts: total=4160,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.147 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.147 job3: (groupid=0, jobs=1): err= 0: pid=156258: Fri 
Oct 11 22:34:12 2024 00:11:09.147 read: IOPS=2193, BW=8775KiB/s (8985kB/s)(8836KiB/1007msec) 00:11:09.147 slat (usec): min=2, max=23424, avg=216.63, stdev=1308.44 00:11:09.147 clat (usec): min=2276, max=59905, avg=27086.36, stdev=11635.90 00:11:09.147 lat (usec): min=6850, max=59909, avg=27302.99, stdev=11673.12 00:11:09.147 clat percentiles (usec): 00:11:09.147 | 1.00th=[ 6980], 5.00th=[13960], 10.00th=[14353], 20.00th=[17957], 00:11:09.147 | 30.00th=[20055], 40.00th=[22938], 50.00th=[23200], 60.00th=[26346], 00:11:09.147 | 70.00th=[30016], 80.00th=[36439], 90.00th=[43254], 95.00th=[52691], 00:11:09.147 | 99.00th=[58983], 99.50th=[58983], 99.90th=[60031], 99.95th=[60031], 00:11:09.147 | 99.99th=[60031] 00:11:09.147 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:11:09.147 slat (usec): min=3, max=17327, avg=198.43, stdev=1195.22 00:11:09.147 clat (usec): min=10535, max=58969, avg=26490.87, stdev=12365.22 00:11:09.147 lat (usec): min=10540, max=59003, avg=26689.30, stdev=12414.95 00:11:09.147 clat percentiles (usec): 00:11:09.147 | 1.00th=[13173], 5.00th=[14091], 10.00th=[14877], 20.00th=[17171], 00:11:09.147 | 30.00th=[19530], 40.00th=[20317], 50.00th=[22676], 60.00th=[23987], 00:11:09.147 | 70.00th=[27132], 80.00th=[31851], 90.00th=[51643], 95.00th=[53740], 00:11:09.147 | 99.00th=[58983], 99.50th=[58983], 99.90th=[58983], 99.95th=[58983], 00:11:09.147 | 99.99th=[58983] 00:11:09.147 bw ( KiB/s): min= 8192, max=12288, per=16.77%, avg=10240.00, stdev=2896.31, samples=2 00:11:09.147 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:11:09.147 lat (msec) : 4=0.02%, 10=0.67%, 20=29.94%, 50=59.38%, 100=9.98% 00:11:09.147 cpu : usr=1.49%, sys=3.98%, ctx=226, majf=0, minf=1 00:11:09.147 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:09.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.147 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:11:09.147 issued rwts: total=2209,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.147 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.147 00:11:09.147 Run status group 0 (all jobs): 00:11:09.147 READ: bw=54.5MiB/s (57.1MB/s), 8775KiB/s-17.8MiB/s (8985kB/s-18.7MB/s), io=56.9MiB (59.6MB), run=1002-1044msec 00:11:09.147 WRITE: bw=59.6MiB/s (62.5MB/s), 9.93MiB/s-18.9MiB/s (10.4MB/s-19.8MB/s), io=62.2MiB (65.3MB), run=1002-1044msec 00:11:09.147 00:11:09.147 Disk stats (read/write): 00:11:09.147 nvme0n1: ios=4046/4096, merge=0/0, ticks=47998/55722, in_queue=103720, util=99.90% 00:11:09.147 nvme0n2: ios=2947/3072, merge=0/0, ticks=14513/11829, in_queue=26342, util=100.00% 00:11:09.147 nvme0n3: ios=3584/3789, merge=0/0, ticks=13343/13749, in_queue=27092, util=88.96% 00:11:09.147 nvme0n4: ios=2094/2196, merge=0/0, ticks=20318/18368, in_queue=38686, util=98.95% 00:11:09.147 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:09.147 [global] 00:11:09.147 thread=1 00:11:09.147 invalidate=1 00:11:09.147 rw=randwrite 00:11:09.147 time_based=1 00:11:09.147 runtime=1 00:11:09.147 ioengine=libaio 00:11:09.147 direct=1 00:11:09.147 bs=4096 00:11:09.147 iodepth=128 00:11:09.147 norandommap=0 00:11:09.147 numjobs=1 00:11:09.147 00:11:09.147 verify_dump=1 00:11:09.147 verify_backlog=512 00:11:09.147 verify_state_save=0 00:11:09.147 do_verify=1 00:11:09.147 verify=crc32c-intel 00:11:09.147 [job0] 00:11:09.147 filename=/dev/nvme0n1 00:11:09.147 [job1] 00:11:09.147 filename=/dev/nvme0n2 00:11:09.147 [job2] 00:11:09.147 filename=/dev/nvme0n3 00:11:09.147 [job3] 00:11:09.147 filename=/dev/nvme0n4 00:11:09.147 Could not set queue depth (nvme0n1) 00:11:09.147 Could not set queue depth (nvme0n2) 00:11:09.147 Could not set queue depth (nvme0n3) 00:11:09.147 Could not set queue depth (nvme0n4) 00:11:09.147 job0: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.147 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.147 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.147 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.147 fio-3.35 00:11:09.147 Starting 4 threads 00:11:10.522 00:11:10.522 job0: (groupid=0, jobs=1): err= 0: pid=156608: Fri Oct 11 22:34:13 2024 00:11:10.522 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:11:10.523 slat (usec): min=2, max=12038, avg=114.27, stdev=781.98 00:11:10.523 clat (usec): min=4021, max=39670, avg=14478.71, stdev=4505.53 00:11:10.523 lat (usec): min=4029, max=39692, avg=14592.98, stdev=4569.64 00:11:10.523 clat percentiles (usec): 00:11:10.523 | 1.00th=[ 6128], 5.00th=[10159], 10.00th=[10421], 20.00th=[10683], 00:11:10.523 | 30.00th=[11338], 40.00th=[12387], 50.00th=[13698], 60.00th=[15664], 00:11:10.523 | 70.00th=[16188], 80.00th=[17171], 90.00th=[19006], 95.00th=[23200], 00:11:10.523 | 99.00th=[30016], 99.50th=[31327], 99.90th=[39584], 99.95th=[39584], 00:11:10.523 | 99.99th=[39584] 00:11:10.523 write: IOPS=4101, BW=16.0MiB/s (16.8MB/s)(16.1MiB/1005msec); 0 zone resets 00:11:10.523 slat (usec): min=4, max=28096, avg=117.62, stdev=862.84 00:11:10.523 clat (usec): min=2560, max=47878, avg=16518.36, stdev=7172.16 00:11:10.523 lat (usec): min=2567, max=47895, avg=16635.98, stdev=7233.50 00:11:10.523 clat percentiles (usec): 00:11:10.523 | 1.00th=[ 3884], 5.00th=[ 8029], 10.00th=[10552], 20.00th=[11207], 00:11:10.523 | 30.00th=[11600], 40.00th=[11994], 50.00th=[13698], 60.00th=[16450], 00:11:10.523 | 70.00th=[20841], 80.00th=[23200], 90.00th=[25297], 95.00th=[29754], 00:11:10.523 | 99.00th=[38536], 99.50th=[40633], 99.90th=[42730], 99.95th=[42730], 00:11:10.523 | 
99.99th=[47973] 00:11:10.523 bw ( KiB/s): min=16400, max=16400, per=25.13%, avg=16400.00, stdev= 0.00, samples=2 00:11:10.523 iops : min= 4100, max= 4100, avg=4100.00, stdev= 0.00, samples=2 00:11:10.523 lat (msec) : 4=0.62%, 10=5.33%, 20=73.50%, 50=20.55% 00:11:10.523 cpu : usr=4.88%, sys=8.37%, ctx=412, majf=0, minf=1 00:11:10.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:10.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.523 issued rwts: total=4096,4122,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.523 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.523 job1: (groupid=0, jobs=1): err= 0: pid=156609: Fri Oct 11 22:34:13 2024 00:11:10.523 read: IOPS=4468, BW=17.5MiB/s (18.3MB/s)(17.5MiB/1002msec) 00:11:10.523 slat (usec): min=2, max=31549, avg=104.82, stdev=810.09 00:11:10.523 clat (usec): min=839, max=71919, avg=13908.74, stdev=8226.69 00:11:10.523 lat (usec): min=5077, max=71936, avg=14013.57, stdev=8291.10 00:11:10.523 clat percentiles (usec): 00:11:10.523 | 1.00th=[ 5932], 5.00th=[ 9503], 10.00th=[10421], 20.00th=[10814], 00:11:10.523 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:11:10.523 | 70.00th=[12125], 80.00th=[13304], 90.00th=[17171], 95.00th=[31589], 00:11:10.523 | 99.00th=[56886], 99.50th=[62129], 99.90th=[62129], 99.95th=[62129], 00:11:10.523 | 99.99th=[71828] 00:11:10.523 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:11:10.523 slat (usec): min=3, max=22806, avg=104.86, stdev=804.25 00:11:10.523 clat (usec): min=5825, max=66028, avg=13303.54, stdev=9832.53 00:11:10.523 lat (usec): min=6004, max=66043, avg=13408.39, stdev=9901.90 00:11:10.523 clat percentiles (usec): 00:11:10.523 | 1.00th=[ 7898], 5.00th=[ 8848], 10.00th=[ 9634], 20.00th=[10552], 00:11:10.523 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11207], 
60.00th=[11469], 00:11:10.523 | 70.00th=[11600], 80.00th=[11731], 90.00th=[12780], 95.00th=[29492], 00:11:10.523 | 99.00th=[62129], 99.50th=[65799], 99.90th=[65799], 99.95th=[65799], 00:11:10.523 | 99.99th=[65799] 00:11:10.523 bw ( KiB/s): min=14608, max=22256, per=28.25%, avg=18432.00, stdev=5407.95, samples=2 00:11:10.523 iops : min= 3652, max= 5564, avg=4608.00, stdev=1351.99, samples=2 00:11:10.523 lat (usec) : 1000=0.01% 00:11:10.523 lat (msec) : 10=9.53%, 20=83.46%, 50=4.23%, 100=2.77% 00:11:10.523 cpu : usr=6.29%, sys=9.49%, ctx=322, majf=0, minf=1 00:11:10.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:10.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.523 issued rwts: total=4477,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.523 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.523 job2: (groupid=0, jobs=1): err= 0: pid=156610: Fri Oct 11 22:34:13 2024 00:11:10.523 read: IOPS=2915, BW=11.4MiB/s (11.9MB/s)(11.5MiB/1006msec) 00:11:10.523 slat (usec): min=2, max=22945, avg=176.23, stdev=1130.71 00:11:10.523 clat (usec): min=1022, max=62371, avg=21841.16, stdev=11229.38 00:11:10.523 lat (usec): min=4707, max=62385, avg=22017.39, stdev=11332.74 00:11:10.523 clat percentiles (usec): 00:11:10.523 | 1.00th=[ 8455], 5.00th=[ 9765], 10.00th=[11469], 20.00th=[12125], 00:11:10.523 | 30.00th=[12518], 40.00th=[15664], 50.00th=[16909], 60.00th=[24511], 00:11:10.523 | 70.00th=[26608], 80.00th=[29230], 90.00th=[35914], 95.00th=[43779], 00:11:10.523 | 99.00th=[54264], 99.50th=[56886], 99.90th=[58459], 99.95th=[58983], 00:11:10.523 | 99.99th=[62129] 00:11:10.523 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:11:10.523 slat (usec): min=3, max=16160, avg=136.59, stdev=729.70 00:11:10.523 clat (usec): min=500, max=58350, avg=20689.16, stdev=9759.22 00:11:10.523 lat 
(usec): min=507, max=59656, avg=20825.75, stdev=9824.54 00:11:10.523 clat percentiles (usec): 00:11:10.523 | 1.00th=[ 3294], 5.00th=[ 7898], 10.00th=[11207], 20.00th=[12649], 00:11:10.523 | 30.00th=[14615], 40.00th=[18744], 50.00th=[20055], 60.00th=[22676], 00:11:10.523 | 70.00th=[23200], 80.00th=[25035], 90.00th=[30540], 95.00th=[41681], 00:11:10.523 | 99.00th=[53216], 99.50th=[54789], 99.90th=[54789], 99.95th=[58459], 00:11:10.523 | 99.99th=[58459] 00:11:10.523 bw ( KiB/s): min= 8840, max=15736, per=18.83%, avg=12288.00, stdev=4876.21, samples=2 00:11:10.523 iops : min= 2210, max= 3934, avg=3072.00, stdev=1219.05, samples=2 00:11:10.523 lat (usec) : 750=0.05%, 1000=0.15% 00:11:10.523 lat (msec) : 2=0.02%, 4=2.00%, 10=3.78%, 20=43.85%, 50=46.93% 00:11:10.523 lat (msec) : 100=3.23% 00:11:10.523 cpu : usr=2.69%, sys=4.18%, ctx=362, majf=0, minf=1 00:11:10.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:10.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.523 issued rwts: total=2933,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.523 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.523 job3: (groupid=0, jobs=1): err= 0: pid=156611: Fri Oct 11 22:34:13 2024 00:11:10.523 read: IOPS=4390, BW=17.2MiB/s (18.0MB/s)(17.3MiB/1006msec) 00:11:10.523 slat (usec): min=2, max=11256, avg=119.86, stdev=749.95 00:11:10.523 clat (usec): min=783, max=29841, avg=15210.39, stdev=5690.63 00:11:10.523 lat (usec): min=4800, max=29848, avg=15330.26, stdev=5734.12 00:11:10.523 clat percentiles (usec): 00:11:10.523 | 1.00th=[ 6587], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[11469], 00:11:10.523 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12780], 60.00th=[13960], 00:11:10.523 | 70.00th=[15270], 80.00th=[20055], 90.00th=[26346], 95.00th=[27395], 00:11:10.523 | 99.00th=[28967], 99.50th=[29754], 99.90th=[29754], 
99.95th=[29754], 00:11:10.523 | 99.99th=[29754] 00:11:10.523 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:11:10.523 slat (usec): min=3, max=10535, avg=91.56, stdev=564.94 00:11:10.523 clat (usec): min=1753, max=27000, avg=13112.08, stdev=4124.20 00:11:10.523 lat (usec): min=1761, max=27007, avg=13203.64, stdev=4159.45 00:11:10.523 clat percentiles (usec): 00:11:10.523 | 1.00th=[ 4686], 5.00th=[ 6849], 10.00th=[ 8848], 20.00th=[11076], 00:11:10.523 | 30.00th=[11863], 40.00th=[12387], 50.00th=[12649], 60.00th=[12911], 00:11:10.523 | 70.00th=[13042], 80.00th=[13698], 90.00th=[19268], 95.00th=[22938], 00:11:10.523 | 99.00th=[26608], 99.50th=[26870], 99.90th=[26870], 99.95th=[26870], 00:11:10.523 | 99.99th=[27132] 00:11:10.523 bw ( KiB/s): min=16384, max=20480, per=28.25%, avg=18432.00, stdev=2896.31, samples=2 00:11:10.523 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:11:10.523 lat (usec) : 1000=0.01% 00:11:10.523 lat (msec) : 2=0.08%, 4=0.11%, 10=11.41%, 20=73.81%, 50=14.58% 00:11:10.523 cpu : usr=2.79%, sys=7.66%, ctx=447, majf=0, minf=1 00:11:10.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:10.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.523 issued rwts: total=4417,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.523 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.523 00:11:10.523 Run status group 0 (all jobs): 00:11:10.523 READ: bw=61.8MiB/s (64.8MB/s), 11.4MiB/s-17.5MiB/s (11.9MB/s-18.3MB/s), io=62.2MiB (65.2MB), run=1002-1006msec 00:11:10.523 WRITE: bw=63.7MiB/s (66.8MB/s), 11.9MiB/s-18.0MiB/s (12.5MB/s-18.8MB/s), io=64.1MiB (67.2MB), run=1002-1006msec 00:11:10.523 00:11:10.523 Disk stats (read/write): 00:11:10.523 nvme0n1: ios=3123/3407, merge=0/0, ticks=45057/59138, in_queue=104195, util=98.00% 00:11:10.523 nvme0n2: 
ios=3615/3840, merge=0/0, ticks=20513/16716, in_queue=37229, util=98.78% 00:11:10.523 nvme0n3: ios=2560/2679, merge=0/0, ticks=28239/30851, in_queue=59090, util=88.96% 00:11:10.523 nvme0n4: ios=4048/4096, merge=0/0, ticks=44367/39013, in_queue=83380, util=89.51% 00:11:10.523 22:34:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:10.523 22:34:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=156747 00:11:10.523 22:34:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:10.523 22:34:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:10.523 [global] 00:11:10.523 thread=1 00:11:10.523 invalidate=1 00:11:10.523 rw=read 00:11:10.523 time_based=1 00:11:10.523 runtime=10 00:11:10.523 ioengine=libaio 00:11:10.523 direct=1 00:11:10.523 bs=4096 00:11:10.523 iodepth=1 00:11:10.523 norandommap=1 00:11:10.523 numjobs=1 00:11:10.523 00:11:10.523 [job0] 00:11:10.523 filename=/dev/nvme0n1 00:11:10.523 [job1] 00:11:10.523 filename=/dev/nvme0n2 00:11:10.523 [job2] 00:11:10.523 filename=/dev/nvme0n3 00:11:10.523 [job3] 00:11:10.523 filename=/dev/nvme0n4 00:11:10.523 Could not set queue depth (nvme0n1) 00:11:10.523 Could not set queue depth (nvme0n2) 00:11:10.523 Could not set queue depth (nvme0n3) 00:11:10.523 Could not set queue depth (nvme0n4) 00:11:10.523 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.523 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.523 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.523 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.523 fio-3.35 00:11:10.523 Starting 4 threads 00:11:13.802 
22:34:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:13.802 22:34:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:13.802 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=344064, buflen=4096 00:11:13.802 fio: pid=156842, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:14.060 22:34:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.060 22:34:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:14.060 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=28667904, buflen=4096 00:11:14.060 fio: pid=156841, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:14.318 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=7614464, buflen=4096 00:11:14.318 fio: pid=156837, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:14.318 22:34:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.318 22:34:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:14.577 22:34:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.577 22:34:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete 
Malloc2 00:11:14.577 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=52629504, buflen=4096 00:11:14.577 fio: pid=156838, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:14.577 00:11:14.577 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=156837: Fri Oct 11 22:34:17 2024 00:11:14.577 read: IOPS=525, BW=2102KiB/s (2153kB/s)(7436KiB/3537msec) 00:11:14.577 slat (usec): min=4, max=11893, avg=26.13, stdev=358.88 00:11:14.577 clat (usec): min=175, max=44942, avg=1860.10, stdev=7746.90 00:11:14.577 lat (usec): min=186, max=52997, avg=1886.24, stdev=7815.76 00:11:14.577 clat percentiles (usec): 00:11:14.577 | 1.00th=[ 198], 5.00th=[ 231], 10.00th=[ 249], 20.00th=[ 265], 00:11:14.577 | 30.00th=[ 281], 40.00th=[ 293], 50.00th=[ 306], 60.00th=[ 330], 00:11:14.577 | 70.00th=[ 351], 80.00th=[ 388], 90.00th=[ 465], 95.00th=[ 529], 00:11:14.577 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[44827], 00:11:14.577 | 99.99th=[44827] 00:11:14.577 bw ( KiB/s): min= 96, max= 5168, per=8.37%, avg=1890.67, stdev=2078.85, samples=6 00:11:14.577 iops : min= 24, max= 1292, avg=472.67, stdev=519.71, samples=6 00:11:14.577 lat (usec) : 250=10.22%, 500=83.17%, 750=2.74% 00:11:14.577 lat (msec) : 20=0.05%, 50=3.76% 00:11:14.577 cpu : usr=0.25%, sys=0.96%, ctx=1862, majf=0, minf=1 00:11:14.577 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.577 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.577 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.577 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.577 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=156838: Fri Oct 11 22:34:17 2024 00:11:14.577 read: IOPS=3328, BW=13.0MiB/s 
(13.6MB/s)(50.2MiB/3861msec) 00:11:14.577 slat (usec): min=3, max=15723, avg=15.62, stdev=223.57 00:11:14.577 clat (usec): min=157, max=42017, avg=280.05, stdev=1402.94 00:11:14.577 lat (usec): min=162, max=42030, avg=295.67, stdev=1421.24 00:11:14.577 clat percentiles (usec): 00:11:14.577 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:11:14.577 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 223], 00:11:14.577 | 70.00th=[ 249], 80.00th=[ 277], 90.00th=[ 310], 95.00th=[ 343], 00:11:14.577 | 99.00th=[ 420], 99.50th=[ 469], 99.90th=[40633], 99.95th=[41157], 00:11:14.577 | 99.99th=[42206] 00:11:14.577 bw ( KiB/s): min= 8768, max=17696, per=62.51%, avg=14113.29, stdev=2855.12, samples=7 00:11:14.577 iops : min= 2192, max= 4424, avg=3528.29, stdev=713.75, samples=7 00:11:14.577 lat (usec) : 250=70.53%, 500=29.22%, 750=0.10% 00:11:14.577 lat (msec) : 2=0.01%, 20=0.02%, 50=0.12% 00:11:14.577 cpu : usr=1.76%, sys=4.07%, ctx=12856, majf=0, minf=2 00:11:14.577 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.577 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.577 issued rwts: total=12850,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.577 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.577 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=156841: Fri Oct 11 22:34:17 2024 00:11:14.577 read: IOPS=2136, BW=8543KiB/s (8748kB/s)(27.3MiB/3277msec) 00:11:14.577 slat (usec): min=4, max=15695, avg=18.30, stdev=231.23 00:11:14.577 clat (usec): min=181, max=41236, avg=443.29, stdev=2747.18 00:11:14.577 lat (usec): min=189, max=41253, avg=461.59, stdev=2757.12 00:11:14.577 clat percentiles (usec): 00:11:14.577 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 212], 20.00th=[ 229], 00:11:14.577 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 
247], 60.00th=[ 253], 00:11:14.577 | 70.00th=[ 262], 80.00th=[ 277], 90.00th=[ 302], 95.00th=[ 326], 00:11:14.577 | 99.00th=[ 510], 99.50th=[ 775], 99.90th=[41157], 99.95th=[41157], 00:11:14.577 | 99.99th=[41157] 00:11:14.577 bw ( KiB/s): min= 264, max=15608, per=39.50%, avg=8918.67, stdev=5349.15, samples=6 00:11:14.577 iops : min= 66, max= 3902, avg=2229.67, stdev=1337.29, samples=6 00:11:14.577 lat (usec) : 250=54.06%, 500=44.67%, 750=0.74%, 1000=0.04% 00:11:14.577 lat (msec) : 20=0.01%, 50=0.46% 00:11:14.577 cpu : usr=1.71%, sys=4.12%, ctx=7005, majf=0, minf=1 00:11:14.577 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.577 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.577 issued rwts: total=7000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.577 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.577 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=156842: Fri Oct 11 22:34:17 2024 00:11:14.577 read: IOPS=28, BW=114KiB/s (117kB/s)(336KiB/2945msec) 00:11:14.577 slat (nsec): min=7517, max=36700, avg=23312.86, stdev=9718.49 00:11:14.577 clat (usec): min=265, max=41309, avg=34753.80, stdev=14598.32 00:11:14.577 lat (usec): min=275, max=41320, avg=34777.20, stdev=14601.28 00:11:14.577 clat percentiles (usec): 00:11:14.577 | 1.00th=[ 265], 5.00th=[ 314], 10.00th=[ 429], 20.00th=[40633], 00:11:14.577 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:14.577 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:14.577 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:14.577 | 99.99th=[41157] 00:11:14.577 bw ( KiB/s): min= 96, max= 144, per=0.50%, avg=112.00, stdev=18.76, samples=5 00:11:14.577 iops : min= 24, max= 36, avg=28.00, stdev= 4.69, samples=5 00:11:14.577 lat (usec) 
: 500=12.94%, 750=1.18% 00:11:14.577 lat (msec) : 10=1.18%, 50=83.53% 00:11:14.577 cpu : usr=0.14%, sys=0.00%, ctx=85, majf=0, minf=2 00:11:14.577 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.577 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.577 issued rwts: total=85,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.577 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.577 00:11:14.577 Run status group 0 (all jobs): 00:11:14.577 READ: bw=22.0MiB/s (23.1MB/s), 114KiB/s-13.0MiB/s (117kB/s-13.6MB/s), io=85.1MiB (89.3MB), run=2945-3861msec 00:11:14.577 00:11:14.577 Disk stats (read/write): 00:11:14.577 nvme0n1: ios=1855/0, merge=0/0, ticks=3270/0, in_queue=3270, util=95.65% 00:11:14.577 nvme0n2: ios=12850/0, merge=0/0, ticks=3498/0, in_queue=3498, util=95.05% 00:11:14.577 nvme0n3: ios=6940/0, merge=0/0, ticks=3635/0, in_queue=3635, util=99.16% 00:11:14.577 nvme0n4: ios=82/0, merge=0/0, ticks=2840/0, in_queue=2840, util=96.75% 00:11:14.836 22:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.836 22:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:15.094 22:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.094 22:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:15.352 22:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.352 22:34:18 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:15.919 22:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.919 22:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:15.919 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:15.919 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 156747 00:11:15.919 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:15.919 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:16.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.177 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:16.177 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:16.177 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:16.177 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.177 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:16.177 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.177 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:16.177 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 
']' 00:11:16.177 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:16.177 nvmf hotplug test: fio failed as expected 00:11:16.177 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.436 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:16.436 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:16.436 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:16.436 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:16.436 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:16.436 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:16.436 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:16.436 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:16.436 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:16.436 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:16.436 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:16.436 rmmod nvme_tcp 00:11:16.436 rmmod nvme_fabrics 00:11:16.436 rmmod nvme_keyring 00:11:16.436 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:16.436 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:16.436 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@129 -- # return 0 00:11:16.436 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 154709 ']' 00:11:16.436 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 154709 00:11:16.436 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 154709 ']' 00:11:16.436 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 154709 00:11:16.436 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:16.436 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:16.436 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 154709 00:11:16.436 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:16.436 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:16.436 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 154709' 00:11:16.436 killing process with pid 154709 00:11:16.436 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 154709 00:11:16.436 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 154709 00:11:16.695 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:16.695 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:16.696 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:16.696 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:16.696 22:34:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:11:16.696 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:16.696 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:11:16.696 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:16.696 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:16.696 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.696 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.696 22:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.237 22:34:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:19.237 00:11:19.237 real 0m24.273s 00:11:19.237 user 1m25.480s 00:11:19.237 sys 0m6.923s 00:11:19.237 22:34:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:19.237 22:34:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.237 ************************************ 00:11:19.237 END TEST nvmf_fio_target 00:11:19.237 ************************************ 00:11:19.237 22:34:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:19.237 22:34:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:19.237 22:34:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:19.237 22:34:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 
-- # set +x 00:11:19.237 ************************************ 00:11:19.237 START TEST nvmf_bdevio 00:11:19.237 ************************************ 00:11:19.237 22:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:19.237 * Looking for test storage... 00:11:19.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.237 22:34:22 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:19.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.237 --rc genhtml_branch_coverage=1 00:11:19.237 --rc genhtml_function_coverage=1 00:11:19.237 --rc genhtml_legend=1 00:11:19.237 --rc geninfo_all_blocks=1 00:11:19.237 --rc geninfo_unexecuted_blocks=1 00:11:19.237 00:11:19.237 ' 00:11:19.237 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:19.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.237 --rc genhtml_branch_coverage=1 00:11:19.237 --rc genhtml_function_coverage=1 00:11:19.237 --rc genhtml_legend=1 00:11:19.238 --rc geninfo_all_blocks=1 00:11:19.238 --rc geninfo_unexecuted_blocks=1 00:11:19.238 00:11:19.238 ' 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:19.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.238 --rc genhtml_branch_coverage=1 00:11:19.238 --rc genhtml_function_coverage=1 00:11:19.238 --rc genhtml_legend=1 00:11:19.238 --rc geninfo_all_blocks=1 00:11:19.238 --rc geninfo_unexecuted_blocks=1 00:11:19.238 00:11:19.238 ' 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:19.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.238 --rc genhtml_branch_coverage=1 00:11:19.238 --rc genhtml_function_coverage=1 00:11:19.238 --rc genhtml_legend=1 00:11:19.238 --rc geninfo_all_blocks=1 00:11:19.238 --rc geninfo_unexecuted_blocks=1 00:11:19.238 00:11:19.238 ' 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # 
uname -s 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:19.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:19.238 22:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:21.149 22:34:24 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:21.149 22:34:24 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:21.149 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:21.149 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:21.149 
22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:21.149 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:21.149 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:21.149 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:21.408 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:21.408 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:21.408 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:21.408 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:21.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:21.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:11:21.409 00:11:21.409 --- 10.0.0.2 ping statistics --- 00:11:21.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.409 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:11:21.409 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:21.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:21.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:11:21.409 00:11:21.409 --- 10.0.0.1 ping statistics --- 00:11:21.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.409 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:11:21.409 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.409 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:11:21.409 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:21.409 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.409 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:21.409 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:21.409 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.409 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:21.409 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:21.409 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:21.409 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:21.409 22:34:24 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:21.409 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.409 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=159481 00:11:21.409 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:21.409 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 159481 00:11:21.409 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 159481 ']' 00:11:21.409 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.409 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:21.409 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.409 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:21.409 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.409 [2024-10-11 22:34:24.552767] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:11:21.409 [2024-10-11 22:34:24.552869] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.409 [2024-10-11 22:34:24.617114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:21.409 [2024-10-11 22:34:24.666610] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.409 [2024-10-11 22:34:24.666659] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.409 [2024-10-11 22:34:24.666687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.409 [2024-10-11 22:34:24.666698] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.409 [2024-10-11 22:34:24.666709] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:21.409 [2024-10-11 22:34:24.668244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:21.409 [2024-10-11 22:34:24.668310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:21.409 [2024-10-11 22:34:24.668378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:21.409 [2024-10-11 22:34:24.668375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.668 [2024-10-11 22:34:24.822945] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.668 22:34:24 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.668 Malloc0 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.668 [2024-10-11 22:34:24.883475] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:21.668 { 00:11:21.668 "params": { 00:11:21.668 "name": "Nvme$subsystem", 00:11:21.668 "trtype": "$TEST_TRANSPORT", 00:11:21.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:21.668 "adrfam": "ipv4", 00:11:21.668 "trsvcid": "$NVMF_PORT", 00:11:21.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:21.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:21.668 "hdgst": ${hdgst:-false}, 00:11:21.668 "ddgst": ${ddgst:-false} 00:11:21.668 }, 00:11:21.668 "method": "bdev_nvme_attach_controller" 00:11:21.668 } 00:11:21.668 EOF 00:11:21.668 )") 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 
00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:11:21.668 22:34:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:21.668 "params": { 00:11:21.668 "name": "Nvme1", 00:11:21.668 "trtype": "tcp", 00:11:21.668 "traddr": "10.0.0.2", 00:11:21.668 "adrfam": "ipv4", 00:11:21.668 "trsvcid": "4420", 00:11:21.668 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:21.668 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:21.668 "hdgst": false, 00:11:21.668 "ddgst": false 00:11:21.668 }, 00:11:21.668 "method": "bdev_nvme_attach_controller" 00:11:21.668 }' 00:11:21.668 [2024-10-11 22:34:24.934259] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:11:21.668 [2024-10-11 22:34:24.934327] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159624 ] 00:11:21.926 [2024-10-11 22:34:24.996247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:21.926 [2024-10-11 22:34:25.048559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.926 [2024-10-11 22:34:25.048603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.926 [2024-10-11 22:34:25.048607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.184 I/O targets: 00:11:22.185 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:22.185 00:11:22.185 00:11:22.185 CUnit - A unit testing framework for C - Version 2.1-3 00:11:22.185 http://cunit.sourceforge.net/ 00:11:22.185 00:11:22.185 00:11:22.185 Suite: bdevio tests on: Nvme1n1 00:11:22.185 Test: blockdev write read block ...passed 00:11:22.185 Test: blockdev write zeroes read block ...passed 00:11:22.185 Test: blockdev write zeroes read no split ...passed 00:11:22.185 Test: blockdev write zeroes read split 
...passed 00:11:22.185 Test: blockdev write zeroes read split partial ...passed 00:11:22.185 Test: blockdev reset ...[2024-10-11 22:34:25.348434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:22.185 [2024-10-11 22:34:25.348545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2495b80 (9): Bad file descriptor 00:11:22.185 [2024-10-11 22:34:25.364987] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:22.185 passed 00:11:22.185 Test: blockdev write read 8 blocks ...passed 00:11:22.185 Test: blockdev write read size > 128k ...passed 00:11:22.185 Test: blockdev write read invalid size ...passed 00:11:22.185 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.185 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.185 Test: blockdev write read max offset ...passed 00:11:22.443 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.443 Test: blockdev writev readv 8 blocks ...passed 00:11:22.443 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.443 Test: blockdev writev readv block ...passed 00:11:22.443 Test: blockdev writev readv size > 128k ...passed 00:11:22.443 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.443 Test: blockdev comparev and writev ...[2024-10-11 22:34:25.575668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.443 [2024-10-11 22:34:25.575703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:22.443 [2024-10-11 22:34:25.575728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.443 [2024-10-11 22:34:25.575747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:22.443 [2024-10-11 22:34:25.576045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.443 [2024-10-11 22:34:25.576070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:22.443 [2024-10-11 22:34:25.576093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.443 [2024-10-11 22:34:25.576110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:22.443 [2024-10-11 22:34:25.576413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.443 [2024-10-11 22:34:25.576437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:22.443 [2024-10-11 22:34:25.576459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.443 [2024-10-11 22:34:25.576475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:22.443 [2024-10-11 22:34:25.576792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.443 [2024-10-11 22:34:25.576817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:22.443 [2024-10-11 22:34:25.576839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:11:22.443 [2024-10-11 22:34:25.576863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:22.443 passed 00:11:22.443 Test: blockdev nvme passthru rw ...passed 00:11:22.443 Test: blockdev nvme passthru vendor specific ...[2024-10-11 22:34:25.659801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:22.443 [2024-10-11 22:34:25.659829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:22.443 [2024-10-11 22:34:25.659971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:22.443 [2024-10-11 22:34:25.659996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:22.443 [2024-10-11 22:34:25.660139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:22.443 [2024-10-11 22:34:25.660163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:22.443 [2024-10-11 22:34:25.660308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:22.443 [2024-10-11 22:34:25.660332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:22.443 passed 00:11:22.443 Test: blockdev nvme admin passthru ...passed 00:11:22.701 Test: blockdev copy ...passed 00:11:22.701 00:11:22.701 Run Summary: Type Total Ran Passed Failed Inactive 00:11:22.701 suites 1 1 n/a 0 0 00:11:22.701 tests 23 23 23 0 0 00:11:22.701 asserts 152 152 152 0 n/a 00:11:22.701 00:11:22.701 Elapsed time = 0.969 seconds 00:11:22.701 22:34:25 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:22.701 22:34:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.701 22:34:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.701 22:34:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.701 22:34:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:22.701 22:34:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:22.701 22:34:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:22.701 22:34:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:22.701 22:34:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:22.701 22:34:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:22.701 22:34:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:22.701 22:34:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:22.701 rmmod nvme_tcp 00:11:22.701 rmmod nvme_fabrics 00:11:22.702 rmmod nvme_keyring 00:11:22.702 22:34:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:22.702 22:34:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:22.702 22:34:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:22.702 22:34:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 159481 ']' 00:11:22.702 22:34:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 159481 00:11:22.702 22:34:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 159481 ']' 
00:11:22.702 22:34:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 159481 00:11:22.702 22:34:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:22.702 22:34:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:22.702 22:34:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 159481 00:11:22.960 22:34:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:22.960 22:34:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:22.960 22:34:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 159481' 00:11:22.960 killing process with pid 159481 00:11:22.960 22:34:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 159481 00:11:22.960 22:34:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 159481 00:11:22.960 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:22.960 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:22.960 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:22.960 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:22.960 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:11:22.960 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:22.960 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:11:22.960 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:22.960 22:34:26 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:22.960 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.960 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.960 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.978 22:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:24.978 00:11:24.978 real 0m6.241s 00:11:24.978 user 0m8.844s 00:11:24.978 sys 0m2.121s 00:11:24.978 22:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:24.978 22:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.978 ************************************ 00:11:24.978 END TEST nvmf_bdevio 00:11:24.978 ************************************ 00:11:25.237 22:34:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:25.237 00:11:25.237 real 3m55.805s 00:11:25.237 user 10m14.938s 00:11:25.237 sys 1m7.032s 00:11:25.237 22:34:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:25.237 22:34:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:25.237 ************************************ 00:11:25.237 END TEST nvmf_target_core 00:11:25.237 ************************************ 00:11:25.237 22:34:28 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:25.237 22:34:28 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:25.237 22:34:28 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:25.237 22:34:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:25.237 
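The nvmf_tcp_init steps traced above (flush addresses, create the target namespace, move the target NIC into it, assign the 10.0.0.0/24 pair, open TCP port 4420, then cross-ping) can be sketched as the following dry-run script. This is not the SPDK common.sh code itself, only a reconstruction of the command sequence visible in the log; the interface names cvl_0_0/cvl_0_1, the namespace name, and the addresses are taken from the log, and the run() wrapper only prints each command so the sketch is safe to execute without root or real NICs.

```shell
#!/usr/bin/env sh
# Dry-run sketch of the netns plumbing performed by nvmf_tcp_init in the log.
# run() prints instead of executing; replace its body with "$@" to apply for real.
run() { printf '+ %s\n' "$*"; }

setup_netns_dryrun() {
  ns=cvl_0_0_ns_spdk                     # target-side namespace, as in the log

  run ip -4 addr flush cvl_0_0           # start from clean interfaces
  run ip -4 addr flush cvl_0_1
  run ip netns add "$ns"
  run ip link set cvl_0_0 netns "$ns"    # target NIC moves into the namespace
  run ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator IP, root ns
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
  run ip link set cvl_0_1 up
  run ip netns exec "$ns" ip link set cvl_0_0 up
  run ip netns exec "$ns" ip link set lo up
  # Allow NVMe/TCP traffic to the default discovery/IO port
  run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  run ping -c 1 10.0.0.2                 # root ns -> target
  run ip netns exec "$ns" ping -c 1 10.0.0.1  # target ns -> initiator
}

setup_netns_dryrun
```

The target app is then launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...`), which is why the listener at 10.0.0.2:4420 is reachable from the root namespace only through cvl_0_1.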
************************************ 00:11:25.237 START TEST nvmf_target_extra 00:11:25.237 ************************************ 00:11:25.237 22:34:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:25.237 * Looking for test storage... 00:11:25.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:25.237 22:34:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:25.237 22:34:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:11:25.237 22:34:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:25.237 22:34:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:25.237 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:25.237 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:25.237 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:25.237 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:25.237 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:25.237 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:25.237 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:25.237 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:25.237 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:25.237 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:25.237 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:25.237 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:25.237 
22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:25.237 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:25.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.238 --rc genhtml_branch_coverage=1 00:11:25.238 --rc genhtml_function_coverage=1 00:11:25.238 --rc genhtml_legend=1 00:11:25.238 --rc geninfo_all_blocks=1 00:11:25.238 
--rc geninfo_unexecuted_blocks=1 00:11:25.238 00:11:25.238 ' 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:25.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.238 --rc genhtml_branch_coverage=1 00:11:25.238 --rc genhtml_function_coverage=1 00:11:25.238 --rc genhtml_legend=1 00:11:25.238 --rc geninfo_all_blocks=1 00:11:25.238 --rc geninfo_unexecuted_blocks=1 00:11:25.238 00:11:25.238 ' 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:25.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.238 --rc genhtml_branch_coverage=1 00:11:25.238 --rc genhtml_function_coverage=1 00:11:25.238 --rc genhtml_legend=1 00:11:25.238 --rc geninfo_all_blocks=1 00:11:25.238 --rc geninfo_unexecuted_blocks=1 00:11:25.238 00:11:25.238 ' 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:25.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.238 --rc genhtml_branch_coverage=1 00:11:25.238 --rc genhtml_function_coverage=1 00:11:25.238 --rc genhtml_legend=1 00:11:25.238 --rc geninfo_all_blocks=1 00:11:25.238 --rc geninfo_unexecuted_blocks=1 00:11:25.238 00:11:25.238 ' 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:25.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:25.238 ************************************ 00:11:25.238 START TEST nvmf_example 00:11:25.238 ************************************ 00:11:25.238 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:25.498 * Looking for test storage... 00:11:25.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:25.498 
22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:25.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.498 --rc genhtml_branch_coverage=1 00:11:25.498 --rc genhtml_function_coverage=1 00:11:25.498 --rc genhtml_legend=1 00:11:25.498 --rc geninfo_all_blocks=1 00:11:25.498 --rc geninfo_unexecuted_blocks=1 00:11:25.498 00:11:25.498 ' 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:25.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.498 --rc genhtml_branch_coverage=1 00:11:25.498 --rc genhtml_function_coverage=1 00:11:25.498 --rc genhtml_legend=1 00:11:25.498 --rc geninfo_all_blocks=1 00:11:25.498 --rc geninfo_unexecuted_blocks=1 00:11:25.498 00:11:25.498 ' 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:25.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.498 --rc genhtml_branch_coverage=1 00:11:25.498 --rc genhtml_function_coverage=1 00:11:25.498 --rc genhtml_legend=1 00:11:25.498 --rc geninfo_all_blocks=1 00:11:25.498 --rc geninfo_unexecuted_blocks=1 00:11:25.498 00:11:25.498 ' 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:25.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.498 --rc 
genhtml_branch_coverage=1 00:11:25.498 --rc genhtml_function_coverage=1 00:11:25.498 --rc genhtml_legend=1 00:11:25.498 --rc geninfo_all_blocks=1 00:11:25.498 --rc geninfo_unexecuted_blocks=1 00:11:25.498 00:11:25.498 ' 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:25.498 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:25.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:25.499 22:34:28 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.499 
22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:25.499 22:34:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:28.036 22:34:30 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:28.036 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:28.036 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:28.036 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:28.036 22:34:30 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:28.036 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:28.036 
22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:28.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:28.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:11:28.036 00:11:28.036 --- 10.0.0.2 ping statistics --- 00:11:28.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.036 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:28.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:28.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:11:28.036 00:11:28.036 --- 10.0.0.1 ping statistics --- 00:11:28.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.036 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.036 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:11:28.037 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:28.037 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.037 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:28.037 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:28.037 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.037 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:28.037 22:34:30 
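The trace above (nvmf/common.sh@250–291) follows a fixed recipe for building the TCP test topology: move one port of the NIC into a fresh network namespace for the target, address both ends of the link, punch a firewall hole for port 4420, and ping in both directions. A dry-run sketch of that recipe is below — interface names (cvl_0_0 / cvl_0_1) and the 10.0.0.0/24 addressing are copied from the log; the `run` wrapper and `DRY_RUN` switch are illustrative assumptions (run as root with `DRY_RUN=0` to actually apply it):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace plumbing this test performs.
# Interface names and addresses come from the log above; the DRY_RUN
# wrapper is an assumption added so the sequence can be previewed safely.
set -euo pipefail

DRY_RUN=${DRY_RUN:-1}
run() { if (( DRY_RUN )); then echo "+ $*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # becomes the target side, moved into the namespace
INI_IF=cvl_0_1   # stays in the default namespace as the initiator

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# Open NVMe/TCP port 4420 on the initiator-facing interface:
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# Connectivity checks, initiator -> target and target -> initiator:
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Note that in the real harness the iptables rule carries an `-m comment --comment 'SPDK_NVMF:...'` tag so teardown can strip exactly the rules this test added.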
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:28.037 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:28.037 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:28.037 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:28.037 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.037 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:28.037 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:28.037 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=161760 00:11:28.037 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:28.037 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:28.037 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 161760 00:11:28.037 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 161760 ']' 00:11:28.037 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.037 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:28.037 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:28.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.037 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:28.037 22:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.037 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:28.037 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:28.037 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:28.037 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:28.037 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.037 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:28.037 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.037 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.037 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.037 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:28.037 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.037 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.037 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.037 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:28.037 22:34:31 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:28.037 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.037 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.037 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.037 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:28.037 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:28.037 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.037 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.037 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.037 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.037 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.037 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.037 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.037 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:28.037 22:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:40.239 Initializing NVMe Controllers 00:11:40.239 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:40.239 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:40.239 Initialization complete. Launching workers. 00:11:40.239 ======================================================== 00:11:40.239 Latency(us) 00:11:40.239 Device Information : IOPS MiB/s Average min max 00:11:40.239 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14947.40 58.39 4281.69 906.07 16444.75 00:11:40.239 ======================================================== 00:11:40.239 Total : 14947.40 58.39 4281.69 906.07 16444.75 00:11:40.239 00:11:40.239 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:40.239 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:40.239 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:40.239 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:40.239 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:40.239 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:40.239 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:40.239 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:40.239 rmmod nvme_tcp 00:11:40.239 rmmod nvme_fabrics 00:11:40.239 rmmod nvme_keyring 00:11:40.239 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:40.239 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
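The subsystem that spdk_nvme_perf just exercised was assembled through a short RPC sequence visible in the trace: create the TCP transport, create a RAM-backed malloc bdev, create a subsystem, attach the bdev as a namespace, and add a TCP listener. A hedged sketch of the same sequence follows — all argument values are copied from the log, while the `rpc` wrapper (which prints instead of executing) is an assumption; in a real SPDK checkout you would point it at `scripts/rpc.py` with an nvmf target app already listening on /var/tmp/spdk.sock:

```shell
# Dry-run sketch of the RPC sequence shown in the log.
# Replace the echo with:  scripts/rpc.py "$@"  to drive a live target.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB in-capsule data
rpc bdev_malloc_create 64 512                      # 64 MiB bdev, 512 B blocks -> "Malloc0"
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The listener address 10.0.0.2 is the target-side interface inside the cvl_0_0_ns_spdk namespace, which is why the perf command's `-r` string targets `traddr:10.0.0.2 trsvcid:4420`.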
00:11:40.239 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:40.239 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 161760 ']' 00:11:40.239 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 161760 00:11:40.239 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 161760 ']' 00:11:40.239 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 161760 00:11:40.239 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:40.239 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:40.239 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 161760 00:11:40.239 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:40.239 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:40.239 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 161760' 00:11:40.239 killing process with pid 161760 00:11:40.239 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 161760 00:11:40.239 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 161760 00:11:40.239 nvmf threads initialize successfully 00:11:40.239 bdev subsystem init successfully 00:11:40.239 created a nvmf target service 00:11:40.239 create targets's poll groups done 00:11:40.239 all subsystems of target started 00:11:40.239 nvmf target is running 00:11:40.239 all subsystems of target stopped 00:11:40.239 destroy targets's poll groups done 00:11:40.239 destroyed the nvmf target service 00:11:40.239 bdev subsystem finish 
successfully 00:11:40.239 nvmf threads destroy successfully 00:11:40.239 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:40.239 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:40.239 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:40.240 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:40.240 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:40.240 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:11:40.240 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:11:40.240 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:40.240 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:40.240 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.240 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.240 22:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.810 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:40.810 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:40.810 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:40.810 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:40.810 00:11:40.810 real 0m15.468s 00:11:40.810 user 0m42.607s 00:11:40.810 sys 0m3.349s 00:11:40.810 22:34:43 
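The fini path in the trace mirrors the setup in reverse: restore iptables minus the SPDK-tagged rules, remove the namespace (which returns cvl_0_0 to the default namespace), and flush the initiator address. A dry-run sketch, with the same `run` print-only wrapper as an assumption and names taken from the log:

```shell
# Dry-run sketch of the teardown this test performs (nvmf_tcp_fini / iptr).
run() { echo "+ $*"; }

# Drop only firewall rules tagged with the SPDK_NVMF comment, leaving
# unrelated rules intact:
run 'iptables-save | grep -v SPDK_NVMF | iptables-restore'
run ip netns delete cvl_0_0_ns_spdk   # cvl_0_0 falls back to the default namespace
run ip -4 addr flush cvl_0_1          # clear the initiator-side address
```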
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:40.810 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:40.810 ************************************ 00:11:40.810 END TEST nvmf_example 00:11:40.810 ************************************ 00:11:40.810 22:34:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:40.810 22:34:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:40.810 22:34:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:40.810 22:34:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:40.810 ************************************ 00:11:40.810 START TEST nvmf_filesystem 00:11:40.810 ************************************ 00:11:40.810 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:40.810 * Looking for test storage... 
00:11:40.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.810 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:40.810 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:11:40.810 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:41.073 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:41.073 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:41.073 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:41.073 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:41.073 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:41.073 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:41.073 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:41.073 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:41.074 
22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:41.074 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:41.074 --rc genhtml_branch_coverage=1 00:11:41.074 --rc genhtml_function_coverage=1 00:11:41.074 --rc genhtml_legend=1 00:11:41.074 --rc geninfo_all_blocks=1 00:11:41.074 --rc geninfo_unexecuted_blocks=1 00:11:41.074 00:11:41.074 ' 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:41.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.074 --rc genhtml_branch_coverage=1 00:11:41.074 --rc genhtml_function_coverage=1 00:11:41.074 --rc genhtml_legend=1 00:11:41.074 --rc geninfo_all_blocks=1 00:11:41.074 --rc geninfo_unexecuted_blocks=1 00:11:41.074 00:11:41.074 ' 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:41.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.074 --rc genhtml_branch_coverage=1 00:11:41.074 --rc genhtml_function_coverage=1 00:11:41.074 --rc genhtml_legend=1 00:11:41.074 --rc geninfo_all_blocks=1 00:11:41.074 --rc geninfo_unexecuted_blocks=1 00:11:41.074 00:11:41.074 ' 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:41.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.074 --rc genhtml_branch_coverage=1 00:11:41.074 --rc genhtml_function_coverage=1 00:11:41.074 --rc genhtml_legend=1 00:11:41.074 --rc geninfo_all_blocks=1 00:11:41.074 --rc geninfo_unexecuted_blocks=1 00:11:41.074 00:11:41.074 ' 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:41.074 22:34:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:41.074 22:34:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:41.074 22:34:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- 
# CONFIG_DAOS=n 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 
00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:11:41.074 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:41.075 22:34:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:41.075 22:34:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:41.075 #define SPDK_CONFIG_H 00:11:41.075 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:41.075 #define SPDK_CONFIG_APPS 1 00:11:41.075 #define SPDK_CONFIG_ARCH native 00:11:41.075 #undef SPDK_CONFIG_ASAN 00:11:41.075 #undef SPDK_CONFIG_AVAHI 00:11:41.075 #undef SPDK_CONFIG_CET 00:11:41.075 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:41.075 #define SPDK_CONFIG_COVERAGE 1 00:11:41.075 #define SPDK_CONFIG_CROSS_PREFIX 00:11:41.075 #undef SPDK_CONFIG_CRYPTO 00:11:41.075 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:41.075 #undef SPDK_CONFIG_CUSTOMOCF 00:11:41.075 #undef SPDK_CONFIG_DAOS 00:11:41.075 #define SPDK_CONFIG_DAOS_DIR 00:11:41.075 #define SPDK_CONFIG_DEBUG 1 00:11:41.075 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:41.075 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:41.075 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:41.075 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:41.075 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:41.075 #undef SPDK_CONFIG_DPDK_UADK 00:11:41.075 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:41.075 #define SPDK_CONFIG_EXAMPLES 1 00:11:41.075 #undef SPDK_CONFIG_FC 00:11:41.075 #define SPDK_CONFIG_FC_PATH 00:11:41.075 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:41.075 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:41.075 #define SPDK_CONFIG_FSDEV 1 00:11:41.075 #undef SPDK_CONFIG_FUSE 00:11:41.075 #undef SPDK_CONFIG_FUZZER 00:11:41.075 #define SPDK_CONFIG_FUZZER_LIB 00:11:41.075 #undef SPDK_CONFIG_GOLANG 00:11:41.075 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:41.075 #define 
SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:41.075 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:41.075 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:41.075 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:41.075 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:41.075 #undef SPDK_CONFIG_HAVE_LZ4 00:11:41.075 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:41.075 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:41.075 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:41.075 #define SPDK_CONFIG_IDXD 1 00:11:41.075 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:41.075 #undef SPDK_CONFIG_IPSEC_MB 00:11:41.075 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:41.075 #define SPDK_CONFIG_ISAL 1 00:11:41.075 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:41.075 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:41.075 #define SPDK_CONFIG_LIBDIR 00:11:41.075 #undef SPDK_CONFIG_LTO 00:11:41.075 #define SPDK_CONFIG_MAX_LCORES 128 00:11:41.075 #define SPDK_CONFIG_NVME_CUSE 1 00:11:41.075 #undef SPDK_CONFIG_OCF 00:11:41.075 #define SPDK_CONFIG_OCF_PATH 00:11:41.075 #define SPDK_CONFIG_OPENSSL_PATH 00:11:41.075 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:41.075 #define SPDK_CONFIG_PGO_DIR 00:11:41.075 #undef SPDK_CONFIG_PGO_USE 00:11:41.075 #define SPDK_CONFIG_PREFIX /usr/local 00:11:41.075 #undef SPDK_CONFIG_RAID5F 00:11:41.075 #undef SPDK_CONFIG_RBD 00:11:41.075 #define SPDK_CONFIG_RDMA 1 00:11:41.075 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:41.075 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:41.075 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:41.075 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:41.075 #define SPDK_CONFIG_SHARED 1 00:11:41.075 #undef SPDK_CONFIG_SMA 00:11:41.075 #define SPDK_CONFIG_TESTS 1 00:11:41.075 #undef SPDK_CONFIG_TSAN 00:11:41.075 #define SPDK_CONFIG_UBLK 1 00:11:41.075 #define SPDK_CONFIG_UBSAN 1 00:11:41.075 #undef SPDK_CONFIG_UNIT_TESTS 00:11:41.075 #undef SPDK_CONFIG_URING 00:11:41.075 #define SPDK_CONFIG_URING_PATH 00:11:41.075 #undef SPDK_CONFIG_URING_ZNS 00:11:41.075 #undef SPDK_CONFIG_USDT 00:11:41.075 
#undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:41.075 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:41.075 #define SPDK_CONFIG_VFIO_USER 1 00:11:41.075 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:41.075 #define SPDK_CONFIG_VHOST 1 00:11:41.075 #define SPDK_CONFIG_VIRTIO 1 00:11:41.075 #undef SPDK_CONFIG_VTUNE 00:11:41.075 #define SPDK_CONFIG_VTUNE_DIR 00:11:41.075 #define SPDK_CONFIG_WERROR 1 00:11:41.075 #define SPDK_CONFIG_WPDK_DIR 00:11:41.075 #undef SPDK_CONFIG_XNVME 00:11:41.075 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.075 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.075 22:34:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:41.076 22:34:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:41.076 
22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:41.076 22:34:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:41.076 
22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@140 -- # : v23.11 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:41.076 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:41.077 22:34:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 
00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:11:41.077 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@288 -- # MAKEFLAGS=-j48 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 163406 ]] 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 163406 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.Xz17c6 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.Xz17c6/tests/target /tmp/spdk.Xz17c6 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # 
sizes["$mount"]=67108864 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=53957455872 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=61988519936 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=8031064064 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:41.078 
22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30984228864 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994259968 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12375277568 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12397707264 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=22429696 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30993948672 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994259968 00:11:41.078 22:34:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=311296 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6198837248 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6198849536 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:11:41.078 * Looking for test storage... 
00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=53957455872 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:11:41.078 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:11:41.079 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=10245656576 00:11:41.079 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:41.079 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.079 22:34:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.079 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.079 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:11:41.079 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:11:41.079 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:11:41.079 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:41.079 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:41.079 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:11:41.079 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:11:41.079 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:41.079 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:41.079 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:41.079 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:41.079 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:41.079 22:34:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:41.079 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:41.079 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:41.079 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:41.079 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:11:41.079 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:41.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.340 --rc genhtml_branch_coverage=1 00:11:41.340 --rc genhtml_function_coverage=1 00:11:41.340 --rc genhtml_legend=1 00:11:41.340 --rc geninfo_all_blocks=1 00:11:41.340 --rc geninfo_unexecuted_blocks=1 00:11:41.340 00:11:41.340 ' 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:41.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.340 --rc genhtml_branch_coverage=1 00:11:41.340 --rc genhtml_function_coverage=1 00:11:41.340 --rc genhtml_legend=1 00:11:41.340 --rc geninfo_all_blocks=1 00:11:41.340 --rc geninfo_unexecuted_blocks=1 00:11:41.340 00:11:41.340 ' 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:41.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.340 --rc genhtml_branch_coverage=1 00:11:41.340 --rc genhtml_function_coverage=1 00:11:41.340 --rc genhtml_legend=1 00:11:41.340 --rc geninfo_all_blocks=1 00:11:41.340 --rc geninfo_unexecuted_blocks=1 00:11:41.340 00:11:41.340 ' 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:41.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.340 --rc genhtml_branch_coverage=1 00:11:41.340 --rc genhtml_function_coverage=1 00:11:41.340 --rc genhtml_legend=1 00:11:41.340 --rc geninfo_all_blocks=1 00:11:41.340 --rc geninfo_unexecuted_blocks=1 00:11:41.340 00:11:41.340 ' 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:41.340 22:34:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.340 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:41.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:41.341 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.879 22:34:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:43.879 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:43.879 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.879 22:34:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:43.879 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:43.879 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:43.879 22:34:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:43.879 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:43.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:43.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:11:43.880 00:11:43.880 --- 10.0.0.2 ping statistics --- 00:11:43.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.880 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:43.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:43.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:11:43.880 00:11:43.880 --- 10.0.0.1 ping statistics --- 00:11:43.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.880 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:43.880 22:34:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:43.880 ************************************ 00:11:43.880 START TEST nvmf_filesystem_no_in_capsule 00:11:43.880 ************************************ 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=165101 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 165101 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@831 -- # '[' -z 165101 ']' 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:43.880 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.880 [2024-10-11 22:34:46.820276] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:11:43.880 [2024-10-11 22:34:46.820368] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.880 [2024-10-11 22:34:46.886784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:43.880 [2024-10-11 22:34:46.937611] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.880 [2024-10-11 22:34:46.937681] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:43.880 [2024-10-11 22:34:46.937710] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.880 [2024-10-11 22:34:46.937721] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.880 [2024-10-11 22:34:46.937731] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:43.880 [2024-10-11 22:34:46.939354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.880 [2024-10-11 22:34:46.939418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.880 [2024-10-11 22:34:46.939441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.880 [2024-10-11 22:34:46.939444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.880 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:43.880 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:43.880 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:43.880 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:43.880 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.880 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.880 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:43.880 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:43.880 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.880 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.880 [2024-10-11 22:34:47.088624] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.880 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.880 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:43.880 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.880 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.139 Malloc1 00:11:44.139 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.139 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:44.139 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.139 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.139 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.139 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:44.139 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.139 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.139 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.139 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.139 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.139 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.139 [2024-10-11 22:34:47.272358] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.139 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.139 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:44.139 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:44.139 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:44.139 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:44.139 22:34:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:44.139 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:44.139 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.139 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.139 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.139 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:44.139 { 00:11:44.139 "name": "Malloc1", 00:11:44.139 "aliases": [ 00:11:44.139 "30071600-b1a2-4bab-9fe0-e47b8b6eb6c5" 00:11:44.139 ], 00:11:44.139 "product_name": "Malloc disk", 00:11:44.139 "block_size": 512, 00:11:44.139 "num_blocks": 1048576, 00:11:44.139 "uuid": "30071600-b1a2-4bab-9fe0-e47b8b6eb6c5", 00:11:44.139 "assigned_rate_limits": { 00:11:44.139 "rw_ios_per_sec": 0, 00:11:44.139 "rw_mbytes_per_sec": 0, 00:11:44.139 "r_mbytes_per_sec": 0, 00:11:44.139 "w_mbytes_per_sec": 0 00:11:44.139 }, 00:11:44.139 "claimed": true, 00:11:44.139 "claim_type": "exclusive_write", 00:11:44.139 "zoned": false, 00:11:44.139 "supported_io_types": { 00:11:44.139 "read": true, 00:11:44.139 "write": true, 00:11:44.139 "unmap": true, 00:11:44.139 "flush": true, 00:11:44.139 "reset": true, 00:11:44.139 "nvme_admin": false, 00:11:44.139 "nvme_io": false, 00:11:44.139 "nvme_io_md": false, 00:11:44.139 "write_zeroes": true, 00:11:44.139 "zcopy": true, 00:11:44.139 "get_zone_info": false, 00:11:44.139 "zone_management": false, 00:11:44.139 "zone_append": false, 00:11:44.139 "compare": false, 00:11:44.139 "compare_and_write": 
false, 00:11:44.139 "abort": true, 00:11:44.139 "seek_hole": false, 00:11:44.139 "seek_data": false, 00:11:44.139 "copy": true, 00:11:44.139 "nvme_iov_md": false 00:11:44.139 }, 00:11:44.139 "memory_domains": [ 00:11:44.139 { 00:11:44.140 "dma_device_id": "system", 00:11:44.140 "dma_device_type": 1 00:11:44.140 }, 00:11:44.140 { 00:11:44.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.140 "dma_device_type": 2 00:11:44.140 } 00:11:44.140 ], 00:11:44.140 "driver_specific": {} 00:11:44.140 } 00:11:44.140 ]' 00:11:44.140 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:44.140 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:44.140 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:44.140 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:44.140 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:44.140 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:44.140 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:44.140 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:45.073 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:45.073 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:45.073 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:45.073 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:45.073 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:46.973 22:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:46.973 22:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:46.973 22:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:46.973 22:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:46.973 22:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:46.974 22:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:46.974 22:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:46.974 22:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:46.974 22:34:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:46.974 22:34:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:46.974 22:34:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:46.974 22:34:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:46.974 22:34:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:46.974 22:34:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:46.974 22:34:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:46.974 22:34:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:46.974 22:34:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:47.232 22:34:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:47.797 22:34:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:48.731 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:48.731 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:48.731 22:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:48.731 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:48.731 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.989 ************************************ 00:11:48.989 START TEST filesystem_ext4 00:11:48.989 ************************************ 00:11:48.989 22:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:48.989 22:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:48.989 22:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:48.989 22:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:48.989 22:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:48.989 22:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:48.989 22:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:48.989 22:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:48.989 22:34:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:48.989 22:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:48.989 22:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:48.989 mke2fs 1.47.0 (5-Feb-2023) 00:11:48.989 Discarding device blocks: 0/522240 done 00:11:48.989 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:48.989 Filesystem UUID: ddd0c1b3-f067-4ec3-8fa8-6a37bf0d909e 00:11:48.989 Superblock backups stored on blocks: 00:11:48.989 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:48.989 00:11:48.989 Allocating group tables: 0/64 done 00:11:48.989 Writing inode tables: 0/64 done 00:11:49.505 Creating journal (8192 blocks): done 00:11:51.809 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:11:51.809 00:11:51.809 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:51.809 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:57.070 22:35:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:57.070 22:35:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:57.070 22:35:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:57.328 22:35:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:57.328 22:35:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:57.328 22:35:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:57.328 22:35:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 165101 00:11:57.328 22:35:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:57.328 22:35:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:57.328 22:35:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:57.328 22:35:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:57.328 00:11:57.328 real 0m8.372s 00:11:57.328 user 0m0.016s 00:11:57.328 sys 0m0.111s 00:11:57.328 22:35:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:57.328 22:35:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:57.328 ************************************ 00:11:57.328 END TEST filesystem_ext4 00:11:57.328 ************************************ 00:11:57.329 22:35:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:57.329 
22:35:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:57.329 22:35:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:57.329 22:35:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.329 ************************************ 00:11:57.329 START TEST filesystem_btrfs 00:11:57.329 ************************************ 00:11:57.329 22:35:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:57.329 22:35:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:57.329 22:35:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:57.329 22:35:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:57.329 22:35:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:57.329 22:35:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:57.329 22:35:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:57.329 22:35:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:57.329 22:35:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:57.329 22:35:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:57.329 22:35:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:57.586 btrfs-progs v6.8.1 00:11:57.586 See https://btrfs.readthedocs.io for more information. 00:11:57.586 00:11:57.586 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:57.586 NOTE: several default settings have changed in version 5.15, please make sure 00:11:57.586 this does not affect your deployments: 00:11:57.587 - DUP for metadata (-m dup) 00:11:57.587 - enabled no-holes (-O no-holes) 00:11:57.587 - enabled free-space-tree (-R free-space-tree) 00:11:57.587 00:11:57.587 Label: (null) 00:11:57.587 UUID: c33c95f2-fef2-4ba7-a76c-58b33f61ee2e 00:11:57.587 Node size: 16384 00:11:57.587 Sector size: 4096 (CPU page size: 4096) 00:11:57.587 Filesystem size: 510.00MiB 00:11:57.587 Block group profiles: 00:11:57.587 Data: single 8.00MiB 00:11:57.587 Metadata: DUP 32.00MiB 00:11:57.587 System: DUP 8.00MiB 00:11:57.587 SSD detected: yes 00:11:57.587 Zoned device: no 00:11:57.587 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:57.587 Checksum: crc32c 00:11:57.587 Number of devices: 1 00:11:57.587 Devices: 00:11:57.587 ID SIZE PATH 00:11:57.587 1 510.00MiB /dev/nvme0n1p1 00:11:57.587 00:11:57.587 22:35:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:57.587 22:35:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:57.844 22:35:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:57.844 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:57.844 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:57.844 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:57.844 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:57.844 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:58.103 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 165101 00:11:58.103 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:58.103 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:58.103 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:58.103 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:58.103 00:11:58.103 real 0m0.715s 00:11:58.103 user 0m0.024s 00:11:58.103 sys 0m0.135s 00:11:58.103 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.103 
22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:58.103 ************************************ 00:11:58.103 END TEST filesystem_btrfs 00:11:58.103 ************************************ 00:11:58.103 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:58.103 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:58.103 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.103 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.103 ************************************ 00:11:58.103 START TEST filesystem_xfs 00:11:58.103 ************************************ 00:11:58.103 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:58.103 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:58.103 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:58.103 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:58.103 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:58.103 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:58.103 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:58.103 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:58.103 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:58.103 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:58.103 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:58.103 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:58.103 = sectsz=512 attr=2, projid32bit=1 00:11:58.103 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:58.103 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:58.103 data = bsize=4096 blocks=130560, imaxpct=25 00:11:58.103 = sunit=0 swidth=0 blks 00:11:58.103 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:58.103 log =internal log bsize=4096 blocks=16384, version=2 00:11:58.103 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:58.103 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:59.037 Discarding blocks...Done. 
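The mkfs.xfs geometry above (data section: bsize=4096, blocks=130560) implies the same 510 MiB partition size that the earlier mkfs.btrfs run reported. A quick sanity check, with the two values copied from the log output:

```shell
# Values taken from the mkfs.xfs geometry in the log above:
# data = bsize=4096 blocks=130560
bsize=4096
blocks=130560

# Total data-section size in bytes, then converted to MiB
bytes=$(( bsize * blocks ))
mib=$(( bytes / 1024 / 1024 ))
echo "${mib} MiB"   # 510 MiB, matching the btrfs "Filesystem size: 510.00MiB"
```

The 2 MiB gap below the 512 MiB malloc bdev is the GPT label and partition alignment overhead from the earlier `parted ... mklabel gpt mkpart` step.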
00:11:59.037 22:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:59.037 22:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:01.565 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:01.565 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:01.565 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:01.565 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:01.565 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:01.565 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:01.565 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 165101 00:12:01.565 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:01.565 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:01.565 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:01.565 22:35:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:01.565 00:12:01.565 real 0m3.364s 00:12:01.565 user 0m0.018s 00:12:01.565 sys 0m0.094s 00:12:01.565 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:01.566 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:01.566 ************************************ 00:12:01.566 END TEST filesystem_xfs 00:12:01.566 ************************************ 00:12:01.566 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:01.824 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:01.824 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:01.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.824 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:01.824 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:01.824 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:01.824 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.825 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:01.825 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.825 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:01.825 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.825 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.825 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.825 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.825 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:01.825 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 165101 00:12:01.825 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 165101 ']' 00:12:01.825 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 165101 00:12:01.825 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:01.825 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:01.825 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 165101 00:12:01.825 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:01.825 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:01.825 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 165101' 00:12:01.825 killing process with pid 165101 00:12:01.825 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 165101 00:12:01.825 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 165101 00:12:02.393 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:02.393 00:12:02.393 real 0m18.644s 00:12:02.393 user 1m12.317s 00:12:02.393 sys 0m2.462s 00:12:02.393 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:02.393 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.393 ************************************ 00:12:02.393 END TEST nvmf_filesystem_no_in_capsule 00:12:02.393 ************************************ 00:12:02.393 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:02.393 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:02.393 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:02.393 22:35:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:02.393 ************************************ 00:12:02.393 START TEST nvmf_filesystem_in_capsule 00:12:02.393 ************************************ 00:12:02.393 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:12:02.393 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:02.393 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:02.393 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:02.393 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:02.393 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.393 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=167473 00:12:02.393 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:02.393 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 167473 00:12:02.393 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 167473 ']' 00:12:02.393 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.393 22:35:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:02.393 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.393 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:02.393 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.393 [2024-10-11 22:35:05.515360] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:12:02.393 [2024-10-11 22:35:05.515451] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.393 [2024-10-11 22:35:05.578677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:02.393 [2024-10-11 22:35:05.620783] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:02.393 [2024-10-11 22:35:05.620865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:02.393 [2024-10-11 22:35:05.620879] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:02.393 [2024-10-11 22:35:05.620914] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:02.393 [2024-10-11 22:35:05.620923] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
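The reactor records that follow show nvmf_tgt starting one reactor per bit of the `-m 0xF` core mask (cores 0 through 3). A minimal sketch of how that mask expands into core numbers, using only shell arithmetic (the mask value is the one from the log; the loop bound of 8 bits is an arbitrary choice for illustration):

```shell
# Expand the -m 0xF core mask into the individual core numbers,
# as seen in the "Reactor started on core N" records below.
mask=$(( 0xF ))
cores=""
for c in 0 1 2 3 4 5 6 7; do
  # Test whether bit c of the mask is set
  if [ $(( (mask >> c) & 1 )) -eq 1 ]; then
    cores="$cores $c"
  fi
done
echo "cores:$cores"   # cores: 0 1 2 3
```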
00:12:02.393 [2024-10-11 22:35:05.622381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.393 [2024-10-11 22:35:05.622490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.393 [2024-10-11 22:35:05.622596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:02.393 [2024-10-11 22:35:05.622600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.652 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:02.652 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:02.652 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:02.652 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:02.652 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.652 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:02.652 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:02.652 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:02.652 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.652 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.652 [2024-10-11 22:35:05.780424] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:02.652 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.652 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:02.652 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.652 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.910 Malloc1 00:12:02.910 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.910 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:02.910 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.910 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.910 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.910 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:02.910 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.910 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.910 22:35:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.910 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.910 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.910 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.910 [2024-10-11 22:35:05.965808] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.910 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.910 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:02.910 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:02.910 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:02.910 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:02.910 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:02.910 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:02.910 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.910 22:35:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.910 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.910 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:02.910 { 00:12:02.910 "name": "Malloc1", 00:12:02.910 "aliases": [ 00:12:02.910 "c49c0f0a-5b70-4f30-8f6b-271c5f1bb684" 00:12:02.910 ], 00:12:02.910 "product_name": "Malloc disk", 00:12:02.910 "block_size": 512, 00:12:02.910 "num_blocks": 1048576, 00:12:02.910 "uuid": "c49c0f0a-5b70-4f30-8f6b-271c5f1bb684", 00:12:02.910 "assigned_rate_limits": { 00:12:02.910 "rw_ios_per_sec": 0, 00:12:02.910 "rw_mbytes_per_sec": 0, 00:12:02.910 "r_mbytes_per_sec": 0, 00:12:02.910 "w_mbytes_per_sec": 0 00:12:02.910 }, 00:12:02.910 "claimed": true, 00:12:02.910 "claim_type": "exclusive_write", 00:12:02.910 "zoned": false, 00:12:02.910 "supported_io_types": { 00:12:02.910 "read": true, 00:12:02.910 "write": true, 00:12:02.910 "unmap": true, 00:12:02.910 "flush": true, 00:12:02.910 "reset": true, 00:12:02.910 "nvme_admin": false, 00:12:02.910 "nvme_io": false, 00:12:02.910 "nvme_io_md": false, 00:12:02.910 "write_zeroes": true, 00:12:02.910 "zcopy": true, 00:12:02.910 "get_zone_info": false, 00:12:02.910 "zone_management": false, 00:12:02.910 "zone_append": false, 00:12:02.910 "compare": false, 00:12:02.910 "compare_and_write": false, 00:12:02.910 "abort": true, 00:12:02.910 "seek_hole": false, 00:12:02.910 "seek_data": false, 00:12:02.910 "copy": true, 00:12:02.910 "nvme_iov_md": false 00:12:02.910 }, 00:12:02.910 "memory_domains": [ 00:12:02.910 { 00:12:02.910 "dma_device_id": "system", 00:12:02.910 "dma_device_type": 1 00:12:02.910 }, 00:12:02.910 { 00:12:02.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.910 "dma_device_type": 2 00:12:02.910 } 00:12:02.910 ], 00:12:02.910 
"driver_specific": {} 00:12:02.910 } 00:12:02.910 ]' 00:12:02.910 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:02.910 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:02.910 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:02.910 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:02.910 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:02.910 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:02.910 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:02.910 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:03.476 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:03.476 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:03.476 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:03.476 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:12:03.476 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:06.004 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:06.004 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:06.004 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:06.004 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:06.004 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:06.004 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:06.004 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:06.004 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:06.004 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:06.004 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:06.004 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:06.004 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:06.004 22:35:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:06.004 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:06.004 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:06.004 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:06.004 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:06.004 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:06.572 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:07.506 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:07.506 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:07.506 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:07.506 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:07.506 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.506 ************************************ 00:12:07.506 START TEST filesystem_in_capsule_ext4 00:12:07.506 ************************************ 00:12:07.506 22:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:07.506 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:07.506 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:07.506 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:07.506 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:07.506 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:07.506 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:07.506 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:07.506 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:07.506 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:07.506 22:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:07.506 mke2fs 1.47.0 (5-Feb-2023) 00:12:07.764 Discarding device blocks: 
0/522240 done 00:12:07.764 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:07.764 Filesystem UUID: f41b3b35-ac66-49f2-92a3-592784fb8d49 00:12:07.764 Superblock backups stored on blocks: 00:12:07.764 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:07.764 00:12:07.764 Allocating group tables: 0/64 done 00:12:07.764 Writing inode tables: 0/64 done 00:12:07.764 Creating journal (8192 blocks): done 00:12:10.070 Writing superblocks and filesystem accounting information: 0/6428/64 done 00:12:10.070 00:12:10.070 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:10.070 22:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:15.333 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:15.333 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:15.333 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:15.333 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:15.333 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:15.333 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:15.333 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 167473 00:12:15.333 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:15.333 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:15.333 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:15.333 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:15.333 00:12:15.333 real 0m7.818s 00:12:15.333 user 0m0.027s 00:12:15.333 sys 0m0.057s 00:12:15.333 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:15.333 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:15.333 ************************************ 00:12:15.333 END TEST filesystem_in_capsule_ext4 00:12:15.333 ************************************ 00:12:15.333 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:15.333 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:15.333 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:15.333 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:15.333 ************************************ 00:12:15.333 START 
TEST filesystem_in_capsule_btrfs 00:12:15.333 ************************************ 00:12:15.333 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:15.333 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:15.333 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:15.333 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:15.333 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:15.333 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:15.333 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:15.333 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:15.333 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:15.333 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:15.333 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:15.591 btrfs-progs v6.8.1 00:12:15.591 See https://btrfs.readthedocs.io for more information. 00:12:15.591 00:12:15.591 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:15.591 NOTE: several default settings have changed in version 5.15, please make sure 00:12:15.591 this does not affect your deployments: 00:12:15.591 - DUP for metadata (-m dup) 00:12:15.591 - enabled no-holes (-O no-holes) 00:12:15.591 - enabled free-space-tree (-R free-space-tree) 00:12:15.591 00:12:15.591 Label: (null) 00:12:15.591 UUID: f6663ac0-67c5-469f-a7c3-c4a8c637615c 00:12:15.591 Node size: 16384 00:12:15.591 Sector size: 4096 (CPU page size: 4096) 00:12:15.591 Filesystem size: 510.00MiB 00:12:15.591 Block group profiles: 00:12:15.591 Data: single 8.00MiB 00:12:15.591 Metadata: DUP 32.00MiB 00:12:15.591 System: DUP 8.00MiB 00:12:15.591 SSD detected: yes 00:12:15.591 Zoned device: no 00:12:15.591 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:15.591 Checksum: crc32c 00:12:15.591 Number of devices: 1 00:12:15.591 Devices: 00:12:15.591 ID SIZE PATH 00:12:15.591 1 510.00MiB /dev/nvme0n1p1 00:12:15.591 00:12:15.591 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:15.591 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:15.850 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:15.850 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:15.850 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:15.850 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:15.850 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:15.850 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:15.850 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 167473 00:12:15.850 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:15.850 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:15.850 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:15.850 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:15.850 00:12:15.850 real 0m0.470s 00:12:15.850 user 0m0.018s 00:12:15.850 sys 0m0.105s 00:12:15.850 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:15.850 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:15.850 ************************************ 00:12:15.850 END TEST filesystem_in_capsule_btrfs 00:12:15.850 ************************************ 00:12:15.850 22:35:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:15.850 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:15.850 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:15.850 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:15.850 ************************************ 00:12:15.850 START TEST filesystem_in_capsule_xfs 00:12:15.850 ************************************ 00:12:15.850 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:15.850 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:15.850 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:15.850 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:15.850 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:15.850 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:15.850 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:15.850 
22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:15.850 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:15.850 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:15.850 22:35:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:16.108 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:16.108 = sectsz=512 attr=2, projid32bit=1 00:12:16.108 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:16.108 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:16.108 data = bsize=4096 blocks=130560, imaxpct=25 00:12:16.108 = sunit=0 swidth=0 blks 00:12:16.108 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:16.108 log =internal log bsize=4096 blocks=16384, version=2 00:12:16.108 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:16.108 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:17.041 Discarding blocks...Done. 
00:12:17.041 22:35:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:17.041 22:35:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:18.940 22:35:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:18.940 22:35:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:18.940 22:35:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:18.940 22:35:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:18.940 22:35:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:18.940 22:35:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:18.940 22:35:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 167473 00:12:18.940 22:35:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:18.940 22:35:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:18.940 22:35:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:18.940 22:35:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:18.940 00:12:18.940 real 0m2.846s 00:12:18.940 user 0m0.010s 00:12:18.940 sys 0m0.067s 00:12:18.940 22:35:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:18.940 22:35:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:18.940 ************************************ 00:12:18.940 END TEST filesystem_in_capsule_xfs 00:12:18.940 ************************************ 00:12:18.940 22:35:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:18.940 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:18.940 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:19.199 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.199 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:19.199 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:19.199 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:19.199 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.199 22:35:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:19.199 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.199 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:19.199 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.199 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.199 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:19.199 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.199 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:19.199 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 167473 00:12:19.199 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 167473 ']' 00:12:19.199 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 167473 00:12:19.199 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:19.199 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:19.199 22:35:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 167473 00:12:19.199 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:19.199 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:19.199 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 167473' 00:12:19.199 killing process with pid 167473 00:12:19.199 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 167473 00:12:19.199 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 167473 00:12:19.765 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:19.765 00:12:19.765 real 0m17.329s 00:12:19.765 user 1m7.325s 00:12:19.765 sys 0m2.066s 00:12:19.765 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:19.765 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:19.765 ************************************ 00:12:19.765 END TEST nvmf_filesystem_in_capsule 00:12:19.765 ************************************ 00:12:19.765 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:19.765 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:19.765 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:19.765 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:19.765 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:19.765 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:19.765 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:19.765 rmmod nvme_tcp 00:12:19.765 rmmod nvme_fabrics 00:12:19.765 rmmod nvme_keyring 00:12:19.765 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:19.765 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:19.765 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:19.765 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:12:19.765 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:19.765 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:19.765 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:19.765 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:19.765 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:12:19.765 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:19.765 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:12:19.765 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:19.765 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:19.765 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.765 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.765 22:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.675 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:21.675 00:12:21.675 real 0m40.916s 00:12:21.675 user 2m20.818s 00:12:21.675 sys 0m6.306s 00:12:21.675 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.675 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:21.675 ************************************ 00:12:21.675 END TEST nvmf_filesystem 00:12:21.675 ************************************ 00:12:21.935 22:35:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:21.935 22:35:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:21.935 22:35:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.935 22:35:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:21.935 ************************************ 00:12:21.935 START TEST nvmf_target_discovery 00:12:21.935 ************************************ 00:12:21.935 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:21.935 * Looking for test storage... 
00:12:21.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:21.935 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:21.935 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:12:21.935 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:21.935 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:21.935 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:21.935 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:21.935 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:21.935 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:21.935 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:21.935 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:21.935 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:21.936 
22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:21.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.936 --rc genhtml_branch_coverage=1 00:12:21.936 --rc genhtml_function_coverage=1 00:12:21.936 --rc genhtml_legend=1 00:12:21.936 --rc geninfo_all_blocks=1 00:12:21.936 --rc geninfo_unexecuted_blocks=1 00:12:21.936 00:12:21.936 ' 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:21.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.936 --rc genhtml_branch_coverage=1 00:12:21.936 --rc genhtml_function_coverage=1 00:12:21.936 --rc genhtml_legend=1 00:12:21.936 --rc geninfo_all_blocks=1 00:12:21.936 --rc geninfo_unexecuted_blocks=1 00:12:21.936 00:12:21.936 ' 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:21.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.936 --rc genhtml_branch_coverage=1 00:12:21.936 --rc genhtml_function_coverage=1 00:12:21.936 --rc genhtml_legend=1 00:12:21.936 --rc geninfo_all_blocks=1 00:12:21.936 --rc geninfo_unexecuted_blocks=1 00:12:21.936 00:12:21.936 ' 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:21.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.936 --rc genhtml_branch_coverage=1 00:12:21.936 --rc genhtml_function_coverage=1 00:12:21.936 --rc genhtml_legend=1 00:12:21.936 --rc geninfo_all_blocks=1 00:12:21.936 --rc geninfo_unexecuted_blocks=1 00:12:21.936 00:12:21.936 ' 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:21.936 22:35:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:21.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:21.936 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.470 22:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:24.470 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:24.470 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:24.470 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:24.470 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:24.470 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:24.470 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:24.470 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:24.470 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:24.470 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:24.470 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:24.470 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:24.470 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:24.470 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:24.470 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:24.470 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:24.470 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:24.471 22:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:24.471 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:24.471 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:24.471 22:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:24.471 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:24.471 22:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:24.471 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:24.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:24.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:12:24.471 00:12:24.471 --- 10.0.0.2 ping statistics --- 00:12:24.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.471 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:24.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:24.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:12:24.471 00:12:24.471 --- 10.0.0.1 ping statistics --- 00:12:24.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.471 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=171770 00:12:24.471 22:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 171770 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 171770 ']' 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.471 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:24.472 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.472 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:24.472 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.472 [2024-10-11 22:35:27.594893] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:12:24.472 [2024-10-11 22:35:27.595004] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:24.472 [2024-10-11 22:35:27.660391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:24.472 [2024-10-11 22:35:27.704671] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
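The `nvmf_tcp_init` steps traced above (common.sh lines 250–291) move the target NIC into a dedicated network namespace, address both ends of the link, open TCP port 4420 in iptables via the `ipts` wrapper, and verify reachability with `ping` in each direction before `nvmf_tgt` is launched inside the namespace. The same sequence can be sketched as a dry-run script; the interface names, namespace name, and 10.0.0.0/24 addressing are taken from the log, and the `run()` helper only echoes each command so the sketch is safe without root (swap `echo` for the real command to apply it):

```shell
#!/bin/sh
# Dry-run sketch of the namespace setup performed by nvmf_tcp_init in the
# trace above. Interface names and addresses come from the log; run() only
# prints each command, so nothing here requires root or real NICs.
TARGET_IF=cvl_0_0        # NIC handed to the SPDK target
INITIATOR_IF=cvl_0_1     # NIC left in the default namespace
NS=${TARGET_IF}_ns_spdk  # namespace name seen in the log: cvl_0_0_ns_spdk

run() { echo "$@"; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP listener port toward the initiator-side interface
# (the log's ipts helper additionally tags the rule with an SPDK_NVMF comment).
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Cross-namespace reachability check, matching common.sh@290/291:
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

With this layout the target application is then started under `ip netns exec cvl_0_0_ns_spdk`, exactly as the `NVMF_TARGET_NS_CMD` array in the trace prepends.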
00:12:24.472 [2024-10-11 22:35:27.704727] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:24.472 [2024-10-11 22:35:27.704747] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:24.472 [2024-10-11 22:35:27.704757] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:24.472 [2024-10-11 22:35:27.704766] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:24.472 [2024-10-11 22:35:27.706174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.472 [2024-10-11 22:35:27.706290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.472 [2024-10-11 22:35:27.706380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:24.472 [2024-10-11 22:35:27.706384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.730 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.731 [2024-10-11 22:35:27.847721] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.731 Null1 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.731 
22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.731 [2024-10-11 22:35:27.892123] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.731 Null2 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.731 
22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.731 Null3 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.731 Null4 00:12:24.731 
22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.731 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.990 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.990 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:24.990 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.990 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.990 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.990 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:24.990 00:12:24.990 Discovery Log Number of Records 6, Generation counter 6 00:12:24.990 =====Discovery Log Entry 0====== 00:12:24.990 trtype: tcp 00:12:24.990 adrfam: ipv4 00:12:24.990 subtype: current discovery subsystem 00:12:24.990 treq: not required 00:12:24.990 portid: 0 00:12:24.990 trsvcid: 4420 00:12:24.990 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:24.990 traddr: 10.0.0.2 00:12:24.990 eflags: explicit discovery connections, duplicate discovery information 00:12:24.990 sectype: none 00:12:24.990 =====Discovery Log Entry 1====== 00:12:24.990 trtype: tcp 00:12:24.990 adrfam: ipv4 00:12:24.990 subtype: nvme subsystem 00:12:24.990 treq: not required 00:12:24.990 portid: 0 00:12:24.990 trsvcid: 4420 00:12:24.990 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:24.990 traddr: 10.0.0.2 00:12:24.990 eflags: none 00:12:24.990 sectype: none 00:12:24.990 =====Discovery Log Entry 2====== 00:12:24.990 
trtype: tcp 00:12:24.990 adrfam: ipv4 00:12:24.990 subtype: nvme subsystem 00:12:24.990 treq: not required 00:12:24.990 portid: 0 00:12:24.990 trsvcid: 4420 00:12:24.990 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:24.990 traddr: 10.0.0.2 00:12:24.990 eflags: none 00:12:24.990 sectype: none 00:12:24.990 =====Discovery Log Entry 3====== 00:12:24.990 trtype: tcp 00:12:24.990 adrfam: ipv4 00:12:24.990 subtype: nvme subsystem 00:12:24.990 treq: not required 00:12:24.990 portid: 0 00:12:24.990 trsvcid: 4420 00:12:24.990 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:24.990 traddr: 10.0.0.2 00:12:24.990 eflags: none 00:12:24.990 sectype: none 00:12:24.990 =====Discovery Log Entry 4====== 00:12:24.990 trtype: tcp 00:12:24.990 adrfam: ipv4 00:12:24.990 subtype: nvme subsystem 00:12:24.990 treq: not required 00:12:24.990 portid: 0 00:12:24.990 trsvcid: 4420 00:12:24.990 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:24.990 traddr: 10.0.0.2 00:12:24.990 eflags: none 00:12:24.990 sectype: none 00:12:24.990 =====Discovery Log Entry 5====== 00:12:24.990 trtype: tcp 00:12:24.990 adrfam: ipv4 00:12:24.990 subtype: discovery subsystem referral 00:12:24.990 treq: not required 00:12:24.990 portid: 0 00:12:24.990 trsvcid: 4430 00:12:24.990 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:24.990 traddr: 10.0.0.2 00:12:24.990 eflags: none 00:12:24.990 sectype: none 00:12:24.990 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:24.990 Perform nvmf subsystem discovery via RPC 00:12:24.990 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:24.990 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.990 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.990 [ 00:12:24.990 { 00:12:24.990 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:12:24.990 "subtype": "Discovery", 00:12:24.990 "listen_addresses": [ 00:12:24.990 { 00:12:24.990 "trtype": "TCP", 00:12:24.990 "adrfam": "IPv4", 00:12:24.990 "traddr": "10.0.0.2", 00:12:24.990 "trsvcid": "4420" 00:12:24.990 } 00:12:24.990 ], 00:12:24.990 "allow_any_host": true, 00:12:24.990 "hosts": [] 00:12:24.990 }, 00:12:24.990 { 00:12:24.990 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:24.990 "subtype": "NVMe", 00:12:24.990 "listen_addresses": [ 00:12:24.990 { 00:12:24.990 "trtype": "TCP", 00:12:24.990 "adrfam": "IPv4", 00:12:24.990 "traddr": "10.0.0.2", 00:12:24.990 "trsvcid": "4420" 00:12:24.990 } 00:12:24.990 ], 00:12:24.990 "allow_any_host": true, 00:12:24.990 "hosts": [], 00:12:24.990 "serial_number": "SPDK00000000000001", 00:12:24.990 "model_number": "SPDK bdev Controller", 00:12:24.990 "max_namespaces": 32, 00:12:24.990 "min_cntlid": 1, 00:12:24.990 "max_cntlid": 65519, 00:12:24.990 "namespaces": [ 00:12:24.990 { 00:12:24.990 "nsid": 1, 00:12:24.990 "bdev_name": "Null1", 00:12:24.990 "name": "Null1", 00:12:24.990 "nguid": "65692768B77443CFA9E47C990247DA38", 00:12:24.990 "uuid": "65692768-b774-43cf-a9e4-7c990247da38" 00:12:24.990 } 00:12:24.990 ] 00:12:24.990 }, 00:12:24.990 { 00:12:24.990 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:24.990 "subtype": "NVMe", 00:12:24.990 "listen_addresses": [ 00:12:24.990 { 00:12:24.990 "trtype": "TCP", 00:12:24.990 "adrfam": "IPv4", 00:12:24.990 "traddr": "10.0.0.2", 00:12:24.990 "trsvcid": "4420" 00:12:24.990 } 00:12:24.990 ], 00:12:24.990 "allow_any_host": true, 00:12:24.990 "hosts": [], 00:12:24.990 "serial_number": "SPDK00000000000002", 00:12:24.990 "model_number": "SPDK bdev Controller", 00:12:24.990 "max_namespaces": 32, 00:12:24.990 "min_cntlid": 1, 00:12:24.990 "max_cntlid": 65519, 00:12:24.990 "namespaces": [ 00:12:24.990 { 00:12:24.990 "nsid": 1, 00:12:24.990 "bdev_name": "Null2", 00:12:24.990 "name": "Null2", 00:12:24.990 "nguid": "0D2563D4A8A84824BCE186F47E709424", 
00:12:24.990 "uuid": "0d2563d4-a8a8-4824-bce1-86f47e709424" 00:12:24.990 } 00:12:24.990 ] 00:12:24.990 }, 00:12:24.990 { 00:12:24.990 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:24.990 "subtype": "NVMe", 00:12:24.990 "listen_addresses": [ 00:12:24.990 { 00:12:24.990 "trtype": "TCP", 00:12:24.990 "adrfam": "IPv4", 00:12:24.990 "traddr": "10.0.0.2", 00:12:24.990 "trsvcid": "4420" 00:12:24.990 } 00:12:24.990 ], 00:12:24.990 "allow_any_host": true, 00:12:24.990 "hosts": [], 00:12:24.990 "serial_number": "SPDK00000000000003", 00:12:24.990 "model_number": "SPDK bdev Controller", 00:12:24.990 "max_namespaces": 32, 00:12:24.990 "min_cntlid": 1, 00:12:24.990 "max_cntlid": 65519, 00:12:24.990 "namespaces": [ 00:12:24.990 { 00:12:24.990 "nsid": 1, 00:12:24.990 "bdev_name": "Null3", 00:12:24.990 "name": "Null3", 00:12:24.990 "nguid": "A550D91EAE34471D8DC9F72CFAC9AA48", 00:12:24.990 "uuid": "a550d91e-ae34-471d-8dc9-f72cfac9aa48" 00:12:24.990 } 00:12:24.990 ] 00:12:24.990 }, 00:12:24.990 { 00:12:24.990 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:24.990 "subtype": "NVMe", 00:12:24.990 "listen_addresses": [ 00:12:24.990 { 00:12:24.990 "trtype": "TCP", 00:12:24.990 "adrfam": "IPv4", 00:12:24.990 "traddr": "10.0.0.2", 00:12:24.990 "trsvcid": "4420" 00:12:24.990 } 00:12:24.990 ], 00:12:24.990 "allow_any_host": true, 00:12:24.990 "hosts": [], 00:12:24.990 "serial_number": "SPDK00000000000004", 00:12:24.990 "model_number": "SPDK bdev Controller", 00:12:24.990 "max_namespaces": 32, 00:12:24.990 "min_cntlid": 1, 00:12:24.990 "max_cntlid": 65519, 00:12:24.990 "namespaces": [ 00:12:24.990 { 00:12:24.990 "nsid": 1, 00:12:24.990 "bdev_name": "Null4", 00:12:24.990 "name": "Null4", 00:12:24.990 "nguid": "F8E19672DAE443598D9407370FC02851", 00:12:24.990 "uuid": "f8e19672-dae4-4359-8d94-07370fc02851" 00:12:24.990 } 00:12:24.990 ] 00:12:24.990 } 00:12:24.990 ] 00:12:24.990 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.990 
22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:24.990 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:24.991 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.991 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.991 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.991 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.991 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:24.991 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.991 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.991 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.991 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:24.991 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:24.991 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.991 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.249 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.249 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:12:25.249 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.249 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.249 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.249 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:25.249 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:25.249 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.249 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.249 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.249 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:25.249 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.249 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.249 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.249 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:25.249 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:25.249 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.249 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:25.249 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:25.250 rmmod nvme_tcp 00:12:25.250 rmmod nvme_fabrics 00:12:25.250 rmmod nvme_keyring 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 171770 ']' 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 171770 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 171770 ']' 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 171770 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:25.250 
22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 171770 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 171770' 00:12:25.250 killing process with pid 171770 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 171770 00:12:25.250 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 171770 00:12:25.509 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:25.509 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:25.509 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:25.509 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:25.509 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:12:25.509 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:12:25.509 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:25.509 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:25.509 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:12:25.509 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.509 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.509 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.417 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:27.417 00:12:27.417 real 0m5.682s 00:12:27.417 user 0m4.624s 00:12:27.417 sys 0m2.006s 00:12:27.417 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:27.417 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.417 ************************************ 00:12:27.417 END TEST nvmf_target_discovery 00:12:27.417 ************************************ 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:27.678 ************************************ 00:12:27.678 START TEST nvmf_referrals 00:12:27.678 ************************************ 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:27.678 * Looking for test storage... 
00:12:27.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:27.678 22:35:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:27.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.678 
--rc genhtml_branch_coverage=1 00:12:27.678 --rc genhtml_function_coverage=1 00:12:27.678 --rc genhtml_legend=1 00:12:27.678 --rc geninfo_all_blocks=1 00:12:27.678 --rc geninfo_unexecuted_blocks=1 00:12:27.678 00:12:27.678 ' 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:27.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.678 --rc genhtml_branch_coverage=1 00:12:27.678 --rc genhtml_function_coverage=1 00:12:27.678 --rc genhtml_legend=1 00:12:27.678 --rc geninfo_all_blocks=1 00:12:27.678 --rc geninfo_unexecuted_blocks=1 00:12:27.678 00:12:27.678 ' 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:27.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.678 --rc genhtml_branch_coverage=1 00:12:27.678 --rc genhtml_function_coverage=1 00:12:27.678 --rc genhtml_legend=1 00:12:27.678 --rc geninfo_all_blocks=1 00:12:27.678 --rc geninfo_unexecuted_blocks=1 00:12:27.678 00:12:27.678 ' 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:27.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.678 --rc genhtml_branch_coverage=1 00:12:27.678 --rc genhtml_function_coverage=1 00:12:27.678 --rc genhtml_legend=1 00:12:27.678 --rc geninfo_all_blocks=1 00:12:27.678 --rc geninfo_unexecuted_blocks=1 00:12:27.678 00:12:27.678 ' 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.678 
22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:27.678 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.679 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:27.679 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:27.679 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:27.679 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:27.679 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.679 22:35:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.679 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:27.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:27.679 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:27.679 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:27.679 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:27.679 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:27.679 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:27.679 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:27.679 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:27.679 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:27.679 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:27.679 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:27.679 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:27.679 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:27.679 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:27.679 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:27.679 22:35:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:27.679 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.679 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.679 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.679 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:27.679 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:27.679 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:27.679 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:30.212 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:30.213 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:30.213 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:30.213 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:30.213 22:35:33 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:30.213 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:30.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:30.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:12:30.213 00:12:30.213 --- 10.0.0.2 ping statistics --- 00:12:30.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.213 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:30.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:30.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:12:30.213 00:12:30.213 --- 10.0.0.1 ping statistics --- 00:12:30.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.213 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=173862 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 173862 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 173862 ']' 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:30.213 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.213 [2024-10-11 22:35:33.269216] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:12:30.213 [2024-10-11 22:35:33.269285] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.213 [2024-10-11 22:35:33.331373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:30.213 [2024-10-11 22:35:33.378122] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:30.213 [2024-10-11 22:35:33.378179] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:30.213 [2024-10-11 22:35:33.378193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:30.213 [2024-10-11 22:35:33.378204] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:30.213 [2024-10-11 22:35:33.378230] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:30.213 [2024-10-11 22:35:33.379673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.213 [2024-10-11 22:35:33.379735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:30.213 [2024-10-11 22:35:33.379784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:30.213 [2024-10-11 22:35:33.379787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.472 [2024-10-11 22:35:33.570748] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.472 [2024-10-11 22:35:33.582984] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:30.472 22:35:33 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:30.472 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:30.730 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:30.730 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:30.730 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:30.730 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.730 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.730 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.730 22:35:33 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:30.730 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.730 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.730 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.730 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:30.730 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.730 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.730 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.730 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:30.731 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:30.731 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.731 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.731 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.731 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:30.731 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:30.731 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:30.731 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:30.731 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:30.731 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:30.731 22:35:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:30.989 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:30.989 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:30.989 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:30.989 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.989 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.989 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.989 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:30.989 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.989 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.989 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.989 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:30.989 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:30.989 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:30.989 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:30.989 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.989 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.989 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:30.989 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.989 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:30.989 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:30.989 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:30.989 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:30.989 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:30.989 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:30.989 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:30.989 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:31.247 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:31.247 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:31.247 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:31.247 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:31.247 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:31.247 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:31.247 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:31.505 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:31.505 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:31.505 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:31.505 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:31.505 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:31.505 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:31.763 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:31.763 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:31.763 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.763 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.763 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.763 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:31.763 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:31.763 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:31.763 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:31.763 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.763 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.763 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:31.763 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.763 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:31.763 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:31.763 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:31.763 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:31.763 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:31.763 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:31.763 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:31.763 22:35:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:31.763 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:31.763 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:31.763 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:31.763 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:31.763 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:31.763 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:31.763 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:32.021 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:32.021 22:35:35 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:32.021 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:32.021 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:32.021 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:32.021 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:32.279 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:32.279 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:32.279 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.279 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.279 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.279 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:32.279 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:32.279 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.279 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:32.279 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.279 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:32.279 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:32.279 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:32.279 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:32.279 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:32.279 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:32.279 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:32.538 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:32.538 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:32.538 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:32.538 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:32.538 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:32.538 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:32.538 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:32.538 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:12:32.538 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:32.538 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:32.538 rmmod nvme_tcp 00:12:32.538 rmmod nvme_fabrics 00:12:32.538 rmmod nvme_keyring 00:12:32.538 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:32.538 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:32.538 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:32.538 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 173862 ']' 00:12:32.538 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 173862 00:12:32.538 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 173862 ']' 00:12:32.538 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 173862 00:12:32.538 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:32.538 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:32.538 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 173862 00:12:32.538 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:32.538 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:32.538 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 173862' 00:12:32.538 killing process with pid 173862 00:12:32.538 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- 
# kill 173862 00:12:32.538 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 173862 00:12:32.797 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:32.797 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:32.797 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:32.797 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:32.797 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:12:32.797 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:32.797 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:12:32.797 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:32.797 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:32.797 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.797 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.798 22:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.335 22:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:35.335 00:12:35.335 real 0m7.258s 00:12:35.335 user 0m11.785s 00:12:35.335 sys 0m2.328s 00:12:35.335 22:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:35.335 22:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:35.335 ************************************ 
00:12:35.335 END TEST nvmf_referrals 00:12:35.335 ************************************ 00:12:35.335 22:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:35.335 22:35:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:35.335 22:35:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:35.335 22:35:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:35.335 ************************************ 00:12:35.335 START TEST nvmf_connect_disconnect 00:12:35.335 ************************************ 00:12:35.335 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:35.335 * Looking for test storage... 
00:12:35.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.335 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:35.335 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:12:35.335 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:35.335 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:35.335 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:35.335 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:35.335 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:35.335 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:35.335 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:35.335 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:35.335 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:35.335 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:35.335 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:35.335 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:35.335 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:35.335 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:35.335 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:35.335 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:35.335 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:35.335 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:35.335 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:35.335 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:35.335 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:35.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.336 --rc genhtml_branch_coverage=1 00:12:35.336 --rc genhtml_function_coverage=1 00:12:35.336 --rc genhtml_legend=1 00:12:35.336 --rc geninfo_all_blocks=1 00:12:35.336 --rc geninfo_unexecuted_blocks=1 00:12:35.336 00:12:35.336 ' 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:35.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.336 --rc genhtml_branch_coverage=1 00:12:35.336 --rc genhtml_function_coverage=1 00:12:35.336 --rc genhtml_legend=1 00:12:35.336 --rc geninfo_all_blocks=1 00:12:35.336 --rc geninfo_unexecuted_blocks=1 00:12:35.336 00:12:35.336 ' 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:35.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.336 --rc genhtml_branch_coverage=1 00:12:35.336 --rc genhtml_function_coverage=1 00:12:35.336 --rc genhtml_legend=1 00:12:35.336 --rc geninfo_all_blocks=1 00:12:35.336 --rc geninfo_unexecuted_blocks=1 00:12:35.336 00:12:35.336 ' 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:35.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.336 --rc genhtml_branch_coverage=1 00:12:35.336 --rc genhtml_function_coverage=1 00:12:35.336 --rc genhtml_legend=1 00:12:35.336 --rc geninfo_all_blocks=1 00:12:35.336 --rc geninfo_unexecuted_blocks=1 00:12:35.336 00:12:35.336 ' 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:35.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:35.336 22:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:37.244 22:35:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:37.244 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:37.244 22:35:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:37.245 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:37.245 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:37.245 22:35:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:37.245 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:37.245 22:35:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:37.245 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:37.245 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:37.504 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:37.504 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:37.504 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:37.505 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:37.505 22:35:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:37.505 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:37.505 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:37.505 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:37.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:37.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:12:37.505 00:12:37.505 --- 10.0.0.2 ping statistics --- 00:12:37.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.505 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:12:37.505 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:37.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:37.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:12:37.505 00:12:37.505 --- 10.0.0.1 ping statistics --- 00:12:37.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.505 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:12:37.505 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:37.505 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:12:37.505 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:37.505 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:37.505 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:37.505 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:37.505 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:37.505 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:37.505 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:37.505 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:37.505 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:37.505 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:37.505 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:37.505 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # 
nvmfpid=176176 00:12:37.505 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 176176 00:12:37.505 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 176176 ']' 00:12:37.505 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.505 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:37.505 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:37.505 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.505 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:37.505 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:37.505 [2024-10-11 22:35:40.671092] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:12:37.505 [2024-10-11 22:35:40.671198] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.505 [2024-10-11 22:35:40.737524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:37.764 [2024-10-11 22:35:40.785614] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:37.764 [2024-10-11 22:35:40.785665] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:37.764 [2024-10-11 22:35:40.785688] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:37.764 [2024-10-11 22:35:40.785699] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:37.764 [2024-10-11 22:35:40.785709] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:37.764 [2024-10-11 22:35:40.787342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.764 [2024-10-11 22:35:40.787450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.764 [2024-10-11 22:35:40.787522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:37.764 [2024-10-11 22:35:40.787526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.764 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:37.764 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:37.764 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:37.764 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:37.764 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:37.764 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.764 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:37.764 22:35:40 
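The `-m 0xF` mask passed to `nvmf_tgt` above selects cores 0-3, which is why the trace shows four reactors starting, one per set bit. A small sketch (not SPDK code) decoding such a mask into the core list:

```shell
# Decode a CPU core mask (as passed via -m) into the list of core indexes,
# one reactor per set bit. 0xF -> cores 0 1 2 3.
decode_mask() {
    local mask=$1 bit
    local -a out=()
    for ((bit = 0; bit < 64; bit++)); do
        (( (mask >> bit) & 1 )) && out+=("$bit")
    done
    echo "${out[*]}"
}

echo "reactors on cores: $(decode_mask 0xF)"
```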
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.764 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:37.764 [2024-10-11 22:35:40.929718] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:37.764 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.764 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:37.764 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.764 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:37.764 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.764 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:37.764 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:37.764 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.764 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:37.764 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.764 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:37.764 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.764 22:35:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:37.764 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.764 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.764 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.764 22:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:37.764 [2024-10-11 22:35:41.001216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.764 22:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.764 22:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:37.764 22:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:37.764 22:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:37.764 22:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:40.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.343 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.824 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.247 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.727 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.945 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.427 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.002 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.196 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.571 [2024-10-11 22:38:55.580364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c4d20 is same with the state(6) to be set 00:15:52.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.609 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.950 22:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:29.950 22:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:29.950 22:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:29.950 22:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:29.950 22:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:29.950 22:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:29.950 22:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:29.950 22:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:29.950 rmmod nvme_tcp 00:16:29.950 
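The long run of `disconnected 1 controller(s)` lines above is the test loop: with `num_iterations=100` and `NVME_CONNECT='nvme connect -i 8'`, it connects to the subsystem and disconnects again, once per iteration. A sketch of that loop; the real commands need root, nvme-cli, and a listening target at 10.0.0.2:4420, so stub functions keep the sketch runnable and are clearly marked as stand-ins.

```shell
# Sketch of the connect/disconnect loop driving the output above.
nqn="nqn.2016-06.io.spdk:cnode1"
num_iterations=3   # the traced run uses 100

# Stub for: nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
nvme_connect()    { echo "connected $1"; }
# Stub for: nvme disconnect -n "$1"
nvme_disconnect() { echo "NQN:$1 disconnected 1 controller(s)"; }

for ((i = 1; i <= num_iterations; i++)); do
    nvme_connect "$nqn" >/dev/null
    nvme_disconnect "$nqn"
done
```

Swapping the stubs for the real `nvme` invocations reproduces the disconnect lines seen in the trace.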
rmmod nvme_fabrics 00:16:29.950 rmmod nvme_keyring 00:16:29.950 22:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:29.950 22:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:29.950 22:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:29.950 22:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 176176 ']' 00:16:29.950 22:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 176176 00:16:29.950 22:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 176176 ']' 00:16:29.950 22:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 176176 00:16:29.950 22:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:16:29.950 22:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:29.950 22:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 176176 00:16:29.950 22:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:29.950 22:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:29.950 22:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 176176' 00:16:29.950 killing process with pid 176176 00:16:29.950 22:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 176176 00:16:29.950 22:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 176176 
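The `killprocess 176176` teardown traced above first probes liveness with `kill -0`, checks the process name via `ps --no-headers -o comm=` (refusing to signal `sudo`), then kills and waits. A condensed sketch of that sequence, exercised against a throwaway `sleep` rather than the real target:

```shell
# Sketch of the killprocess teardown: liveness probe, name check, kill, wait.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0          # already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [[ "$name" == sudo ]] && return 1               # never signal sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # reap; ignore signal status
}

sleep 60 & pid=$!
killprocess "$pid"
kill -0 "$pid" 2>/dev/null && echo "still alive" || echo "gone"
```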
00:16:29.950 22:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:29.950 22:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:29.950 22:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:29.950 22:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:29.950 22:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:16:29.950 22:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:16:29.950 22:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:29.950 22:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:29.950 22:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:29.950 22:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.950 22:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:29.950 22:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.490 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:32.490 00:16:32.490 real 3m57.181s 00:16:32.490 user 15m2.853s 00:16:32.490 sys 0m35.212s 00:16:32.490 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:32.490 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:32.490 
************************************ 00:16:32.490 END TEST nvmf_connect_disconnect 00:16:32.490 ************************************ 00:16:32.490 22:39:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:32.490 22:39:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:32.490 22:39:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:32.490 22:39:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:32.490 ************************************ 00:16:32.490 START TEST nvmf_multitarget 00:16:32.490 ************************************ 00:16:32.490 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:32.490 * Looking for test storage... 
00:16:32.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:32.490 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:32.490 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:16:32.490 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:32.490 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:32.490 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:32.490 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:32.490 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:32.490 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:32.491 --rc 
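The `cmp_versions 1.15 '<' 2` trace above splits both version strings into arrays and compares them component by component, padding the shorter with zeros. A compact sketch of that less-than helper (numeric components only, as in the traced comparison):

```shell
# Pure-bash sketch of the version comparison traced above: split on dots,
# compare numerically per component, missing components count as 0.
# Returns 0 (true) when ver1 < ver2.
version_lt() {
    local -a v1 v2
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    local len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < len; i++)); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Note the numeric comparison: `1.9 < 1.15` holds here (9 < 15), which is exactly why a plain string sort is wrong for versions.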
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.491 --rc genhtml_branch_coverage=1 00:16:32.491 --rc genhtml_function_coverage=1 00:16:32.491 --rc genhtml_legend=1 00:16:32.491 --rc geninfo_all_blocks=1 00:16:32.491 --rc geninfo_unexecuted_blocks=1 00:16:32.491 00:16:32.491 ' 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:32.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.491 --rc genhtml_branch_coverage=1 00:16:32.491 --rc genhtml_function_coverage=1 00:16:32.491 --rc genhtml_legend=1 00:16:32.491 --rc geninfo_all_blocks=1 00:16:32.491 --rc geninfo_unexecuted_blocks=1 00:16:32.491 00:16:32.491 ' 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:32.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.491 --rc genhtml_branch_coverage=1 00:16:32.491 --rc genhtml_function_coverage=1 00:16:32.491 --rc genhtml_legend=1 00:16:32.491 --rc geninfo_all_blocks=1 00:16:32.491 --rc geninfo_unexecuted_blocks=1 00:16:32.491 00:16:32.491 ' 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:32.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.491 --rc genhtml_branch_coverage=1 00:16:32.491 --rc genhtml_function_coverage=1 00:16:32.491 --rc genhtml_legend=1 00:16:32.491 --rc geninfo_all_blocks=1 00:16:32.491 --rc geninfo_unexecuted_blocks=1 00:16:32.491 00:16:32.491 ' 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:32.491 22:39:35 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:32.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.491 22:39:35 
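The `[: : integer expression expected` error logged above comes from `'[' '' -eq 1 ']'`: the variable being tested is empty, and `-eq` requires an integer. A common guard, shown here with a hypothetical `flag` variable, is to default the value to 0 before the numeric test:

```shell
# Reproducing and guarding the "integer expression expected" failure above.
flag=""                          # unset/empty, as in the failing line
# [ "$flag" -eq 1 ]              # this form emits: integer expression expected
if [ "${flag:-0}" -eq 1 ]; then  # default empty to 0 so -eq always gets an int
    echo "flag set"
else
    echo "flag not set"
fi
```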
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:32.491 22:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:34.396 22:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:34.396 22:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:34.396 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:34.396 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.396 22:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:34.396 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.396 
22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:34.396 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:34.396 22:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:34.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:34.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:16:34.396 00:16:34.396 --- 10.0.0.2 ping statistics --- 00:16:34.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.396 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:16:34.396 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:34.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:34.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:16:34.655 00:16:34.655 --- 10.0.0.1 ping statistics --- 00:16:34.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.655 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:16:34.655 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.655 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:16:34.655 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:34.655 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.655 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:34.655 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:34.655 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.655 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:34.655 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:34.655 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:34.655 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:34.655 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:34.655 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:34.655 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=208148 00:16:34.655 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:34.655 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 208148 00:16:34.655 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 208148 ']' 00:16:34.655 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.655 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:34.655 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.655 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:34.655 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:34.655 [2024-10-11 22:39:37.745806] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:16:34.655 [2024-10-11 22:39:37.745894] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.655 [2024-10-11 22:39:37.808822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:34.655 [2024-10-11 22:39:37.855596] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:34.655 [2024-10-11 22:39:37.855646] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:34.655 [2024-10-11 22:39:37.855675] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:34.655 [2024-10-11 22:39:37.855686] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:34.655 [2024-10-11 22:39:37.855696] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:34.655 [2024-10-11 22:39:37.857311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.655 [2024-10-11 22:39:37.857378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.655 [2024-10-11 22:39:37.857433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.655 [2024-10-11 22:39:37.857436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.914 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:34.914 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:16:34.914 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:34.914 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:34.914 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:34.914 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.914 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:34.914 22:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:34.914 22:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:34.914 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:34.914 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:35.172 "nvmf_tgt_1" 00:16:35.172 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:35.172 "nvmf_tgt_2" 00:16:35.172 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:35.172 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:35.429 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:35.429 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:35.429 true 00:16:35.429 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:35.688 true 00:16:35.688 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:35.688 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:35.688 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:35.688 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:35.688 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:35.688 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:35.688 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:35.688 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:35.688 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:35.688 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:35.688 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:35.688 rmmod nvme_tcp 00:16:35.688 rmmod nvme_fabrics 00:16:35.688 rmmod nvme_keyring 00:16:35.688 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:35.688 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:35.688 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:35.688 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 208148 ']' 00:16:35.688 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 208148 00:16:35.688 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 208148 ']' 00:16:35.688 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 208148 00:16:35.688 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:16:35.688 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:35.688 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 208148 00:16:35.688 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:35.688 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:35.688 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 208148' 00:16:35.688 killing process with pid 208148 00:16:35.688 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 208148 00:16:35.688 22:39:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 208148 00:16:35.948 22:39:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:35.948 22:39:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:35.948 22:39:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:35.948 22:39:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:35.948 22:39:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:16:35.948 22:39:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:35.948 22:39:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:16:35.948 22:39:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:35.948 22:39:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:35.948 22:39:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.948 22:39:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.948 22:39:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.493 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:38.493 00:16:38.493 real 0m5.933s 00:16:38.493 user 0m6.855s 00:16:38.493 sys 0m2.038s 00:16:38.493 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:38.494 ************************************ 00:16:38.494 END TEST nvmf_multitarget 00:16:38.494 ************************************ 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:38.494 ************************************ 00:16:38.494 START TEST nvmf_rpc 00:16:38.494 ************************************ 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:38.494 * Looking for test storage... 
00:16:38.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:38.494 22:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:38.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.494 --rc genhtml_branch_coverage=1 00:16:38.494 --rc genhtml_function_coverage=1 00:16:38.494 --rc genhtml_legend=1 00:16:38.494 --rc geninfo_all_blocks=1 00:16:38.494 --rc geninfo_unexecuted_blocks=1 
00:16:38.494 00:16:38.494 ' 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:38.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.494 --rc genhtml_branch_coverage=1 00:16:38.494 --rc genhtml_function_coverage=1 00:16:38.494 --rc genhtml_legend=1 00:16:38.494 --rc geninfo_all_blocks=1 00:16:38.494 --rc geninfo_unexecuted_blocks=1 00:16:38.494 00:16:38.494 ' 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:38.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.494 --rc genhtml_branch_coverage=1 00:16:38.494 --rc genhtml_function_coverage=1 00:16:38.494 --rc genhtml_legend=1 00:16:38.494 --rc geninfo_all_blocks=1 00:16:38.494 --rc geninfo_unexecuted_blocks=1 00:16:38.494 00:16:38.494 ' 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:38.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.494 --rc genhtml_branch_coverage=1 00:16:38.494 --rc genhtml_function_coverage=1 00:16:38.494 --rc genhtml_legend=1 00:16:38.494 --rc geninfo_all_blocks=1 00:16:38.494 --rc geninfo_unexecuted_blocks=1 00:16:38.494 00:16:38.494 ' 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.494 22:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:38.494 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:38.494 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:38.495 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:38.495 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:38.495 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:38.495 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:38.495 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.495 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:38.495 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.495 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:38.495 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:38.495 22:39:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:38.495 22:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:40.397 
22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 
(0x8086 - 0x159b)' 00:16:40.397 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:40.397 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:40.398 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:40.398 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:40.398 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.398 22:39:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:40.398 
22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:40.398 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:40.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:40.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:16:40.657 00:16:40.657 --- 10.0.0.2 ping statistics --- 00:16:40.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.657 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:40.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:40.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:16:40.657 00:16:40.657 --- 10.0.0.1 ping statistics --- 00:16:40.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.657 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=210254 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:40.657 
22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 210254 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 210254 ']' 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:40.657 22:39:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.657 [2024-10-11 22:39:43.840894] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:16:40.657 [2024-10-11 22:39:43.840984] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.657 [2024-10-11 22:39:43.911248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:40.916 [2024-10-11 22:39:43.958834] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.916 [2024-10-11 22:39:43.958884] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.916 [2024-10-11 22:39:43.958912] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.916 [2024-10-11 22:39:43.958923] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:16:40.916 [2024-10-11 22:39:43.958933] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:40.916 [2024-10-11 22:39:43.960526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.916 [2024-10-11 22:39:43.960655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.916 [2024-10-11 22:39:43.960678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:40.916 [2024-10-11 22:39:43.960681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.916 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:40.916 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:16:40.916 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:40.916 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:40.916 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.916 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.916 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:40.916 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.916 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.916 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.916 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:40.916 "tick_rate": 2700000000, 00:16:40.916 "poll_groups": [ 00:16:40.916 { 00:16:40.916 "name": "nvmf_tgt_poll_group_000", 00:16:40.916 "admin_qpairs": 0, 00:16:40.916 "io_qpairs": 0, 00:16:40.916 
"current_admin_qpairs": 0, 00:16:40.916 "current_io_qpairs": 0, 00:16:40.916 "pending_bdev_io": 0, 00:16:40.916 "completed_nvme_io": 0, 00:16:40.916 "transports": [] 00:16:40.916 }, 00:16:40.916 { 00:16:40.916 "name": "nvmf_tgt_poll_group_001", 00:16:40.916 "admin_qpairs": 0, 00:16:40.916 "io_qpairs": 0, 00:16:40.916 "current_admin_qpairs": 0, 00:16:40.916 "current_io_qpairs": 0, 00:16:40.916 "pending_bdev_io": 0, 00:16:40.916 "completed_nvme_io": 0, 00:16:40.916 "transports": [] 00:16:40.916 }, 00:16:40.916 { 00:16:40.916 "name": "nvmf_tgt_poll_group_002", 00:16:40.916 "admin_qpairs": 0, 00:16:40.916 "io_qpairs": 0, 00:16:40.916 "current_admin_qpairs": 0, 00:16:40.916 "current_io_qpairs": 0, 00:16:40.916 "pending_bdev_io": 0, 00:16:40.916 "completed_nvme_io": 0, 00:16:40.916 "transports": [] 00:16:40.916 }, 00:16:40.916 { 00:16:40.916 "name": "nvmf_tgt_poll_group_003", 00:16:40.916 "admin_qpairs": 0, 00:16:40.916 "io_qpairs": 0, 00:16:40.916 "current_admin_qpairs": 0, 00:16:40.916 "current_io_qpairs": 0, 00:16:40.916 "pending_bdev_io": 0, 00:16:40.916 "completed_nvme_io": 0, 00:16:40.916 "transports": [] 00:16:40.916 } 00:16:40.916 ] 00:16:40.916 }' 00:16:40.916 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:40.916 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:40.916 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:40.916 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:40.916 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:40.916 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:41.175 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:41.175 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:41.175 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.175 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.175 [2024-10-11 22:39:44.205208] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.175 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.175 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:41.175 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.175 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.175 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.175 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:41.175 "tick_rate": 2700000000, 00:16:41.175 "poll_groups": [ 00:16:41.175 { 00:16:41.175 "name": "nvmf_tgt_poll_group_000", 00:16:41.175 "admin_qpairs": 0, 00:16:41.175 "io_qpairs": 0, 00:16:41.175 "current_admin_qpairs": 0, 00:16:41.175 "current_io_qpairs": 0, 00:16:41.175 "pending_bdev_io": 0, 00:16:41.175 "completed_nvme_io": 0, 00:16:41.175 "transports": [ 00:16:41.175 { 00:16:41.175 "trtype": "TCP" 00:16:41.175 } 00:16:41.175 ] 00:16:41.175 }, 00:16:41.175 { 00:16:41.175 "name": "nvmf_tgt_poll_group_001", 00:16:41.175 "admin_qpairs": 0, 00:16:41.175 "io_qpairs": 0, 00:16:41.175 "current_admin_qpairs": 0, 00:16:41.175 "current_io_qpairs": 0, 00:16:41.175 "pending_bdev_io": 0, 00:16:41.175 "completed_nvme_io": 0, 00:16:41.175 "transports": [ 00:16:41.175 { 00:16:41.175 "trtype": "TCP" 00:16:41.175 } 00:16:41.175 ] 00:16:41.175 }, 00:16:41.175 { 00:16:41.175 "name": "nvmf_tgt_poll_group_002", 00:16:41.175 "admin_qpairs": 0, 00:16:41.175 "io_qpairs": 0, 00:16:41.175 
"current_admin_qpairs": 0,
00:16:41.175 "current_io_qpairs": 0,
00:16:41.175 "pending_bdev_io": 0,
00:16:41.175 "completed_nvme_io": 0,
00:16:41.175 "transports": [
00:16:41.175 {
00:16:41.175 "trtype": "TCP"
00:16:41.175 }
00:16:41.175 ]
00:16:41.175 },
00:16:41.175 {
00:16:41.175 "name": "nvmf_tgt_poll_group_003",
00:16:41.175 "admin_qpairs": 0,
00:16:41.175 "io_qpairs": 0,
00:16:41.175 "current_admin_qpairs": 0,
00:16:41.175 "current_io_qpairs": 0,
00:16:41.175 "pending_bdev_io": 0,
00:16:41.175 "completed_nvme_io": 0,
00:16:41.175 "transports": [
00:16:41.175 {
00:16:41.175 "trtype": "TCP"
00:16:41.175 }
00:16:41.175 ]
00:16:41.175 }
00:16:41.175 ]
00:16:41.175 }'
00:16:41.175 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs'
00:16:41.175 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:16:41.175 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:16:41.175 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:16:41.175 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 ))
00:16:41.175 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs'
00:16:41.175 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:16:41.175 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:16:41.175 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:16:41.175 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 ))
00:16:41.175 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']'
00:16:41.175 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64
00:16:41.175 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512
00:16:41.175 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:16:41.175 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.175 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:41.175 Malloc1
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:41.176 [2024-10-11 22:39:44.367258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]]
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420
00:16:41.176 [2024-10-11 22:39:44.389977] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55'
00:16:41.176 Failed to write to /dev/nvme-fabrics: Input/output error
00:16:41.176 could not add new controller: failed to write to nvme-fabrics device
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:41.176 22:39:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:42.109 22:39:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME
00:16:42.109 22:39:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:16:42.109 22:39:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:16:42.110 22:39:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:16:42.110 22:39:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:44.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]]
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:44.009 [2024-10-11 22:39:47.231295] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55'
00:16:44.009 Failed to write to /dev/nvme-fabrics: Input/output error
00:16:44.009 could not add new controller: failed to write to nvme-fabrics device
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:44.009 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:44.942 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:16:44.943 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:16:44.943 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:16:44.943 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:16:44.943 22:39:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:16:46.840 22:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:16:46.840 22:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:16:46.840 22:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:16:46.840 22:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:16:46.840 22:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:16:46.840 22:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:16:46.840 22:39:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:46.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:46.840 [2024-10-11 22:39:50.085574] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.840 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:47.775 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:16:47.775 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:16:47.775 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:16:47.775 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:16:47.775 22:39:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:16:49.673 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:49.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:49.674 [2024-10-11 22:39:52.867304] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:49.674 22:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:50.607 22:39:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:16:50.607 22:39:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:16:50.608 22:39:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:16:50.608 22:39:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:16:50.608 22:39:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:52.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:52.606 [2024-10-11 22:39:55.673412] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:52.606 22:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:53.216 22:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:16:53.216 22:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:16:53.216 22:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:16:53.216 22:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:16:53.216 22:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:55.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:55.182 [2024-10-11 22:39:58.420169] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:55.182 22:39:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:56.168 22:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:16:56.168 22:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:16:56.168 22:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:16:56.168 22:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:16:56.168 22:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:16:58.155 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:16:58.155 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:16:58.155 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:16:58.155 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:16:58.155 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:16:58.155 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:16:58.155 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:58.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:58.155 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:58.155 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:16:58.155 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:16:58.155 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:58.155 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:16:58.155 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:58.155 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:16:58.155 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:16:58.155 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:58.155 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:58.155 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:58.155 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:58.156 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:58.156 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:58.156 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:58.156 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:16:58.156 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:58.156 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:58.156 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:58.156 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:58.156 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:58.156 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:58.156 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:58.156 [2024-10-11 22:40:01.279703] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:58.156 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:58.156 22:40:01
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:58.156 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.156 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.156 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.156 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:58.156 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.156 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.156 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.156 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:58.765 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:58.765 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:58.765 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:58.765 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:58.765 22:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:00.803 22:40:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:00.803 22:40:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l 
-o NAME,SERIAL 00:17:00.803 22:40:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:00.803 22:40:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:00.803 22:40:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:00.803 22:40:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:00.803 22:40:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:01.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.090 [2024-10-11 22:40:04.149762] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.090 [2024-10-11 22:40:04.197815] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:01.090 
22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.090 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.090 [2024-10-11 22:40:04.246003] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:01.091 
22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.091 [2024-10-11 22:40:04.294150] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.091 [2024-10-11 
22:40:04.342330] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.091 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.381 
22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:01.381 "tick_rate": 2700000000, 00:17:01.381 "poll_groups": [ 00:17:01.381 { 00:17:01.381 "name": "nvmf_tgt_poll_group_000", 00:17:01.381 "admin_qpairs": 2, 00:17:01.381 "io_qpairs": 84, 00:17:01.381 "current_admin_qpairs": 0, 00:17:01.381 "current_io_qpairs": 0, 00:17:01.381 "pending_bdev_io": 0, 00:17:01.381 "completed_nvme_io": 184, 00:17:01.381 "transports": [ 00:17:01.381 { 00:17:01.381 "trtype": "TCP" 00:17:01.381 } 00:17:01.381 ] 00:17:01.381 }, 00:17:01.381 { 00:17:01.381 "name": "nvmf_tgt_poll_group_001", 00:17:01.381 "admin_qpairs": 2, 00:17:01.381 "io_qpairs": 84, 00:17:01.381 "current_admin_qpairs": 0, 00:17:01.381 "current_io_qpairs": 0, 00:17:01.381 "pending_bdev_io": 0, 00:17:01.381 "completed_nvme_io": 145, 00:17:01.381 "transports": [ 00:17:01.381 { 00:17:01.381 "trtype": "TCP" 00:17:01.381 } 00:17:01.381 ] 00:17:01.381 }, 00:17:01.381 { 00:17:01.381 "name": "nvmf_tgt_poll_group_002", 00:17:01.381 "admin_qpairs": 1, 00:17:01.381 "io_qpairs": 84, 00:17:01.381 "current_admin_qpairs": 0, 00:17:01.381 "current_io_qpairs": 0, 00:17:01.381 "pending_bdev_io": 0, 00:17:01.381 "completed_nvme_io": 219, 00:17:01.381 "transports": [ 00:17:01.381 { 00:17:01.381 "trtype": "TCP" 00:17:01.381 } 00:17:01.381 ] 00:17:01.381 }, 00:17:01.381 { 00:17:01.381 "name": "nvmf_tgt_poll_group_003", 00:17:01.381 "admin_qpairs": 2, 00:17:01.381 "io_qpairs": 84, 
00:17:01.381 "current_admin_qpairs": 0, 00:17:01.381 "current_io_qpairs": 0, 00:17:01.381 "pending_bdev_io": 0, 00:17:01.381 "completed_nvme_io": 138, 00:17:01.381 "transports": [ 00:17:01.381 { 00:17:01.381 "trtype": "TCP" 00:17:01.381 } 00:17:01.381 ] 00:17:01.381 } 00:17:01.381 ] 00:17:01.381 }' 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:01.381 rmmod nvme_tcp 00:17:01.381 rmmod nvme_fabrics 00:17:01.381 rmmod nvme_keyring 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 210254 ']' 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 210254 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 210254 ']' 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 210254 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 210254 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 210254' 00:17:01.381 killing process with pid 210254 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@969 -- # kill 210254 00:17:01.381 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 210254 00:17:01.653 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:01.653 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:01.653 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:01.653 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:01.653 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:17:01.653 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:01.653 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:17:01.653 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:01.653 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:01.653 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.653 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.653 22:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.577 22:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:03.577 00:17:03.577 real 0m25.563s 00:17:03.577 user 1m22.579s 00:17:03.577 sys 0m4.373s 00:17:03.577 22:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:03.577 22:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.577 ************************************ 00:17:03.577 END TEST nvmf_rpc 00:17:03.577 
************************************ 00:17:03.577 22:40:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:03.577 22:40:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:03.577 22:40:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:03.577 22:40:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:03.837 ************************************ 00:17:03.837 START TEST nvmf_invalid 00:17:03.837 ************************************ 00:17:03.837 22:40:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:03.837 * Looking for test storage... 00:17:03.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:03.837 22:40:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:03.837 22:40:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:17:03.837 22:40:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
scripts/common.sh@336 -- # read -ra ver1 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:03.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.837 --rc genhtml_branch_coverage=1 00:17:03.837 --rc genhtml_function_coverage=1 00:17:03.837 --rc genhtml_legend=1 00:17:03.837 --rc geninfo_all_blocks=1 00:17:03.837 --rc geninfo_unexecuted_blocks=1 00:17:03.837 00:17:03.837 ' 
00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:03.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.837 --rc genhtml_branch_coverage=1 00:17:03.837 --rc genhtml_function_coverage=1 00:17:03.837 --rc genhtml_legend=1 00:17:03.837 --rc geninfo_all_blocks=1 00:17:03.837 --rc geninfo_unexecuted_blocks=1 00:17:03.837 00:17:03.837 ' 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:03.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.837 --rc genhtml_branch_coverage=1 00:17:03.837 --rc genhtml_function_coverage=1 00:17:03.837 --rc genhtml_legend=1 00:17:03.837 --rc geninfo_all_blocks=1 00:17:03.837 --rc geninfo_unexecuted_blocks=1 00:17:03.837 00:17:03.837 ' 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:03.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.837 --rc genhtml_branch_coverage=1 00:17:03.837 --rc genhtml_function_coverage=1 00:17:03.837 --rc genhtml_legend=1 00:17:03.837 --rc geninfo_all_blocks=1 00:17:03.837 --rc geninfo_unexecuted_blocks=1 00:17:03.837 00:17:03.837 ' 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.837 22:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.837 
22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:03.837 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.837 22:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.838 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:03.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:03.838 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:03.838 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:03.838 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:03.838 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:03.838 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:03.838 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:03.838 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:03.838 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:03.838 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:03.838 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:03.838 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.838 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:03.838 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:03.838 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:03.838 22:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.838 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:03.838 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.838 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:03.838 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:03.838 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:03.838 22:40:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:06.382 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:06.382 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:06.382 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:06.382 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:06.382 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:06.382 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:06.382 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:06.382 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:06.382 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:06.382 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:06.382 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:06.382 22:40:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:06.382 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:06.382 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:06.382 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:06.383 22:40:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:06.383 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:06.383 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:06.383 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:06.383 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:06.383 22:40:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:06.383 22:40:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:06.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:06.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:17:06.383 00:17:06.383 --- 10.0.0.2 ping statistics --- 00:17:06.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.383 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:06.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:06.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:17:06.383 00:17:06.383 --- 10.0.0.1 ping statistics --- 00:17:06.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.383 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:06.383 22:40:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=214818 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 214818 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 214818 ']' 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:06.383 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:06.384 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:06.384 [2024-10-11 22:40:09.433748] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:17:06.384 [2024-10-11 22:40:09.433832] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.384 [2024-10-11 22:40:09.499504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:06.384 [2024-10-11 22:40:09.548710] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:06.384 [2024-10-11 22:40:09.548765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.384 [2024-10-11 22:40:09.548781] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.384 [2024-10-11 22:40:09.548793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.384 [2024-10-11 22:40:09.548804] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:06.384 [2024-10-11 22:40:09.550386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:06.384 [2024-10-11 22:40:09.550438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:17:06.384 [2024-10-11 22:40:09.550420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:17:06.384 [2024-10-11 22:40:09.550441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:06.645 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:17:06.645 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0
00:17:06.645 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:17:06.645 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable
00:17:06.645 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:17:06.645 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:06.645 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:17:06.645 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode23520
00:17:06.904 [2024-10-11 22:40:09.932960] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:17:06.904 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:17:06.904 {
00:17:06.904 "nqn": "nqn.2016-06.io.spdk:cnode23520",
00:17:06.904 "tgt_name": "foobar",
00:17:06.904 "method": "nvmf_create_subsystem",
00:17:06.904 "req_id": 1
00:17:06.904 }
00:17:06.904 Got JSON-RPC error response
00:17:06.904 response:
00:17:06.904 {
00:17:06.904 "code": -32603,
00:17:06.904 "message": "Unable to find target foobar"
00:17:06.904 }'
00:17:06.904 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:17:06.904 {
00:17:06.904 "nqn": "nqn.2016-06.io.spdk:cnode23520",
00:17:06.904 "tgt_name": "foobar",
00:17:06.904 "method": "nvmf_create_subsystem",
00:17:06.904 "req_id": 1
00:17:06.904 }
00:17:06.904 Got JSON-RPC error response
00:17:06.904 response:
00:17:06.904 {
00:17:06.904 "code": -32603,
00:17:06.904 "message": "Unable to find target foobar"
00:17:06.904 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:17:06.904 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:17:06.904 22:40:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode6636
00:17:07.162 [2024-10-11 22:40:10.213970] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6636: invalid serial number 'SPDKISFASTANDAWESOME'
00:17:07.162 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:17:07.162 {
00:17:07.162 "nqn": "nqn.2016-06.io.spdk:cnode6636",
00:17:07.162 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:17:07.162 "method": "nvmf_create_subsystem",
00:17:07.162 "req_id": 1
00:17:07.162 }
00:17:07.162 Got JSON-RPC error response
00:17:07.162 response:
00:17:07.162 {
00:17:07.162 "code": -32602,
00:17:07.162 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:17:07.162 }'
00:17:07.162 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:17:07.162 {
00:17:07.162 "nqn": "nqn.2016-06.io.spdk:cnode6636",
00:17:07.162 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:17:07.162 "method": "nvmf_create_subsystem",
00:17:07.162 "req_id": 1
00:17:07.162 }
00:17:07.162 Got JSON-RPC error response
00:17:07.162 response:
00:17:07.162 {
00:17:07.162 "code": -32602,
00:17:07.162 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:17:07.162 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:17:07.162 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:17:07.162 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode17898
00:17:07.422 [2024-10-11 22:40:10.518939] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17898: invalid model number 'SPDK_Controller'
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:17:07.422 {
00:17:07.422 "nqn": "nqn.2016-06.io.spdk:cnode17898",
00:17:07.422 "model_number": "SPDK_Controller\u001f",
00:17:07.422 "method": "nvmf_create_subsystem",
00:17:07.422 "req_id": 1
00:17:07.422 }
00:17:07.422 Got JSON-RPC error response
00:17:07.422 response:
00:17:07.422 {
00:17:07.422 "code": -32602,
00:17:07.422 "message": "Invalid MN SPDK_Controller\u001f"
00:17:07.422 }'
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:17:07.422 {
00:17:07.422 "nqn": "nqn.2016-06.io.spdk:cnode17898",
00:17:07.422 "model_number": "SPDK_Controller\u001f",
00:17:07.422 "method": "nvmf_create_subsystem",
00:17:07.422 "req_id": 1
00:17:07.422 }
00:17:07.422 Got JSON-RPC error response
00:17:07.422 response:
00:17:07.422 {
00:17:07.422 "code": -32602,
00:17:07.422 "message": "Invalid MN SPDK_Controller\u001f"
00:17:07.422 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58'
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52'
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39'
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59'
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39'
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69'
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f'
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f'
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c'
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\'
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42'
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73'
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31'
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a'
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45'
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.422 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31'
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50'
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39'
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72'
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b'
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';'
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40'
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56'
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ X == \- ]]
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'XR9Y9iO/\Bs1JE1P9r;@V'
00:17:07.423 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'XR9Y9iO/\Bs1JE1P9r;@V' nqn.2016-06.io.spdk:cnode487
00:17:07.682 [2024-10-11 22:40:10.860077] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode487: invalid serial number 'XR9Y9iO/\Bs1JE1P9r;@V'
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request:
00:17:07.682 {
00:17:07.682 "nqn": "nqn.2016-06.io.spdk:cnode487",
00:17:07.682 "serial_number": "XR9Y9iO/\\Bs1JE1P9r;@V",
00:17:07.682 "method": "nvmf_create_subsystem",
00:17:07.682 "req_id": 1
00:17:07.682 }
00:17:07.682 Got JSON-RPC error response
00:17:07.682 response:
00:17:07.682 {
00:17:07.682 "code": -32602,
00:17:07.682 "message": "Invalid SN XR9Y9iO/\\Bs1JE1P9r;@V"
00:17:07.682 }'
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request:
00:17:07.682 {
00:17:07.682 "nqn": "nqn.2016-06.io.spdk:cnode487",
00:17:07.682 "serial_number": "XR9Y9iO/\\Bs1JE1P9r;@V",
00:17:07.682 "method": "nvmf_create_subsystem",
00:17:07.682 "req_id": 1
00:17:07.682 }
00:17:07.682 Got JSON-RPC error response
00:17:07.682 response:
00:17:07.682 {
00:17:07.682 "code": -32602,
00:17:07.682 "message": "Invalid SN XR9Y9iO/\\Bs1JE1P9r;@V"
00:17:07.682 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
00:17:07.682 22:40:10
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b'
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42'
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27'
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\'
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43'
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c'
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<'
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e'
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>'
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23'
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#'
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f'
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d'
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+==
00:17:07.682 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68'
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63'
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f'
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177'
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c'
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<'
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d'
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23'
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#'
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33'
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76'
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.683 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f'
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38'
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a'
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21'
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!'
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f'
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54'
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49'
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40'
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70'
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48
00:17:07.942 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30'
00:17:07.943 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0
00:17:07.943 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.943 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.943 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55
00:17:07.943 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37'
00:17:07.943 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7
00:17:07.943 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.943 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.943 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103
00:17:07.943 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67'
00:17:07.943 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g
00:17:07.943 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.943 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.943 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102
00:17:07.943 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66'
00:17:07.943 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f
00:17:07.943 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.943 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.943 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43
00:17:07.943 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b'
00:17:07.943 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+
00:17:07.943 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.943 22:40:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d'
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']'
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e'
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29'
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')'
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d'
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=-
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b'
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68'
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b'
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45'
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e'
00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n
00:17:07.943 22:40:11
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ k == \- ]] 00:17:07.943 22:40:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'kB'\''C<>#o=hc#o=hc#o=hc#o=hc\u007f#o=hc\u007f#o=hc\u007f#o=hc\u007f /dev/null' 00:17:10.789 22:40:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.700 22:40:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:12.700 00:17:12.700 real 0m9.066s 00:17:12.700 user 0m21.434s 00:17:12.700 sys 0m2.613s 00:17:12.700 22:40:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:12.700 22:40:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:12.700 ************************************ 00:17:12.700 END TEST nvmf_invalid 00:17:12.700 ************************************ 00:17:12.700 22:40:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:12.700 22:40:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
00:17:12.700 22:40:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:12.700 22:40:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:12.959 ************************************ 00:17:12.959 START TEST nvmf_connect_stress 00:17:12.959 ************************************ 00:17:12.959 22:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:12.959 * Looking for test storage... 00:17:12.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:12.959 
22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 
-- # ver2[v]=2 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:12.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.959 --rc genhtml_branch_coverage=1 00:17:12.959 --rc genhtml_function_coverage=1 00:17:12.959 --rc genhtml_legend=1 00:17:12.959 --rc geninfo_all_blocks=1 00:17:12.959 --rc geninfo_unexecuted_blocks=1 00:17:12.959 00:17:12.959 ' 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:12.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.959 --rc genhtml_branch_coverage=1 00:17:12.959 --rc genhtml_function_coverage=1 00:17:12.959 --rc genhtml_legend=1 00:17:12.959 --rc geninfo_all_blocks=1 00:17:12.959 --rc geninfo_unexecuted_blocks=1 00:17:12.959 00:17:12.959 ' 00:17:12.959 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:12.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.959 --rc genhtml_branch_coverage=1 00:17:12.959 --rc genhtml_function_coverage=1 00:17:12.959 --rc genhtml_legend=1 00:17:12.959 --rc geninfo_all_blocks=1 00:17:12.960 --rc geninfo_unexecuted_blocks=1 00:17:12.960 00:17:12.960 ' 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:12.960 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.960 --rc genhtml_branch_coverage=1 00:17:12.960 --rc genhtml_function_coverage=1 00:17:12.960 --rc genhtml_legend=1 00:17:12.960 --rc geninfo_all_blocks=1 00:17:12.960 --rc geninfo_unexecuted_blocks=1 00:17:12.960 00:17:12.960 ' 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.960 22:40:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:12.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:12.960 22:40:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:15.497 22:40:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:15.497 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:15.497 22:40:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:15.497 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.497 22:40:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:15.497 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:15.497 Found net devices under 0000:0a:00.1: cvl_0_1 
00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.497 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:15.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:15.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:17:15.498 00:17:15.498 --- 10.0.0.2 ping statistics --- 00:17:15.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.498 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:15.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:15.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:17:15.498 00:17:15.498 --- 10.0.0.1 ping statistics --- 00:17:15.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.498 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:15.498 22:40:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=217452 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 217452 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 217452 ']' 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.498 [2024-10-11 22:40:18.384009] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
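[Editor's note] The `nvmf_tcp_init` trace earlier in this log (nvmf/common.sh@250–291) moves the target NIC into a network namespace and gives each side an address on 10.0.0.0/24 before verifying with `ping`. Those commands can be collected into a standalone sketch; the interface names (`cvl_0_0`, `cvl_0_1`), namespace name, addresses, and port 4420 are taken from the log, everything else (the `run` wrapper, the `RUN` flag) is our own scaffolding. The script only prints the commands unless `RUN=1`, since actually executing them requires root and the physical NICs:

```shell
#!/usr/bin/env bash
# Sketch of the NVMe/TCP test topology set up in the log above: the target
# interface is moved into a network namespace and the initiator reaches it
# over 10.0.0.0/24. Prints each command by default; set RUN=1 to execute.
set -euo pipefail

TARGET_IF=cvl_0_0          # from the log: NVMF_TARGET_INTERFACE
INITIATOR_IF=cvl_0_1       # from the log: NVMF_INITIATOR_INTERFACE
NS=cvl_0_0_ns_spdk         # from the log: NVMF_TARGET_NAMESPACE
INITIATOR_IP=10.0.0.1
TARGET_IP=10.0.0.2

run() { if [ "${RUN:-0}" = 1 ]; then "$@"; else echo "$*"; fi; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Allow NVMe/TCP traffic to the target port (4420) on the initiator side.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Connectivity check in both directions, as the harness does.
run ping -c 1 "$TARGET_IP"
run ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"
```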
00:17:15.498 [2024-10-11 22:40:18.384088] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.498 [2024-10-11 22:40:18.449786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:15.498 [2024-10-11 22:40:18.497924] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:15.498 [2024-10-11 22:40:18.497975] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:15.498 [2024-10-11 22:40:18.497990] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:15.498 [2024-10-11 22:40:18.498000] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:15.498 [2024-10-11 22:40:18.498009] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
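[Editor's note] The `waitforlisten 217452` step above blocks until the freshly started `nvmf_tgt` creates its RPC socket at /var/tmp/spdk.sock. A minimal sketch of that polling pattern, under our own assumptions (the function name, retry count, and sleep interval are illustrative, not the harness's actual implementation):

```shell
#!/usr/bin/env bash
# Poll until a UNIX-domain socket appears, in the spirit of the harness's
# waitforlisten for /var/tmp/spdk.sock. Returns nonzero on timeout.
wait_for_socket() {
    local sock=$1 retries=${2:-30}
    while [ "$retries" -gt 0 ]; do
        if [ -S "$sock" ]; then
            return 0
        fi
        retries=$((retries - 1))
        sleep 1
    done
    return 1
}

# Demonstrate the timeout path with a socket that does not exist.
if wait_for_socket "/tmp/no_such_socket_$$" 1; then
    echo "unexpectedly found socket"
else
    echo "timed out waiting for socket"
fi
```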
00:17:15.498 [2024-10-11 22:40:18.499478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:15.498 [2024-10-11 22:40:18.499547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:15.498 [2024-10-11 22:40:18.499575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.498 [2024-10-11 22:40:18.645626] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 
-- # xtrace_disable 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.498 [2024-10-11 22:40:18.662845] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.498 NULL1 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=217601 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:15.498 22:40:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.498 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.499 22:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.066 22:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.066 22:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:16.066 22:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.066 22:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.066 22:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.325 22:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.325 22:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:16.325 22:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.325 22:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.325 22:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.583 22:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.584 22:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:16.584 22:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.584 22:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.584 22:40:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.842 22:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.842 22:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:16.842 22:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.842 22:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.842 22:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.101 22:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.101 22:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:17.101 22:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.101 22:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.101 22:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.669 22:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.669 22:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:17.669 22:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.669 22:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.669 22:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.928 22:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.928 22:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:17.928 22:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.928 22:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.928 22:40:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.186 22:40:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.186 22:40:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:18.187 22:40:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.187 22:40:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.187 22:40:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.445 22:40:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.445 22:40:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:18.445 22:40:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.445 22:40:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.445 22:40:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.704 22:40:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.704 22:40:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:18.704 22:40:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.704 22:40:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.704 22:40:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.271 22:40:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.271 22:40:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:19.271 22:40:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.271 22:40:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.271 22:40:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.529 22:40:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.529 22:40:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:19.529 22:40:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.529 22:40:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.529 22:40:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.788 22:40:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.788 22:40:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:19.788 22:40:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.788 22:40:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.788 22:40:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.047 22:40:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.047 22:40:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:20.047 22:40:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.047 22:40:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.047 22:40:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.306 22:40:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.306 22:40:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:20.306 22:40:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.306 22:40:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.306 22:40:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.873 22:40:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.873 22:40:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:20.873 22:40:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.873 22:40:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.873 22:40:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.132 22:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.132 22:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:21.132 22:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.132 22:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.132 22:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.390 22:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.390 22:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:21.390 22:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.390 22:40:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.390 22:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.649 22:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.649 22:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:21.649 22:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.649 22:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.649 22:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.908 22:40:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.908 22:40:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:21.908 22:40:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.908 22:40:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.908 22:40:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.475 22:40:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.475 22:40:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:22.475 22:40:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.475 22:40:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.475 22:40:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.734 22:40:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.734 22:40:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:22.734 22:40:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.734 22:40:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.734 22:40:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.993 22:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.993 22:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:22.993 22:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.993 22:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.993 22:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.252 22:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.252 22:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:23.252 22:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.252 22:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.252 22:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.510 22:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.510 22:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:23.510 
22:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.510 22:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.510 22:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.077 22:40:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.077 22:40:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:24.077 22:40:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.077 22:40:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.077 22:40:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.336 22:40:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.336 22:40:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:24.336 22:40:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.336 22:40:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.336 22:40:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.594 22:40:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.594 22:40:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:24.594 22:40:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.594 22:40:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.594 
22:40:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.852 22:40:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.852 22:40:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:24.852 22:40:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.852 22:40:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.852 22:40:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.111 22:40:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.111 22:40:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:25.111 22:40:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:25.111 22:40:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.111 22:40:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.679 22:40:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.679 22:40:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:25.679 22:40:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:25.679 22:40:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.679 22:40:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.679 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:25.938 22:40:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.938 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217601 00:17:25.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (217601) - No such process 00:17:25.938 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 217601 00:17:25.938 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:25.938 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:25.938 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:25.938 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:25.938 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:25.938 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:25.938 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:25.938 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:25.938 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:25.938 rmmod nvme_tcp 00:17:25.938 rmmod nvme_fabrics 00:17:25.938 rmmod nvme_keyring 00:17:25.938 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:25.938 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:25.938 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 
00:17:25.938 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 217452 ']' 00:17:25.938 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 217452 00:17:25.938 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 217452 ']' 00:17:25.938 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 217452 00:17:25.938 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:17:25.938 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:25.938 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 217452 00:17:25.938 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:25.938 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:25.938 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 217452' 00:17:25.938 killing process with pid 217452 00:17:25.938 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 217452 00:17:25.938 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 217452 00:17:26.199 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:26.199 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:26.199 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:26.199 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 
00:17:26.199 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:17:26.199 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:26.199 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:17:26.199 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:26.199 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:26.199 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.199 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:26.199 22:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.111 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:28.111 00:17:28.111 real 0m15.377s 00:17:28.111 user 0m40.092s 00:17:28.111 sys 0m4.529s 00:17:28.111 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:28.111 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:28.111 ************************************ 00:17:28.111 END TEST nvmf_connect_stress 00:17:28.111 ************************************ 00:17:28.371 22:40:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:28.371 22:40:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:28.371 22:40:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 
-- # xtrace_disable 00:17:28.371 22:40:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:28.371 ************************************ 00:17:28.371 START TEST nvmf_fused_ordering 00:17:28.371 ************************************ 00:17:28.371 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:28.371 * Looking for test storage... 00:17:28.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:28.371 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # 
local 'op=<' 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:28.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.372 --rc genhtml_branch_coverage=1 00:17:28.372 --rc genhtml_function_coverage=1 00:17:28.372 --rc genhtml_legend=1 00:17:28.372 --rc geninfo_all_blocks=1 00:17:28.372 --rc geninfo_unexecuted_blocks=1 00:17:28.372 00:17:28.372 ' 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:28.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.372 --rc genhtml_branch_coverage=1 00:17:28.372 --rc genhtml_function_coverage=1 00:17:28.372 --rc genhtml_legend=1 00:17:28.372 --rc geninfo_all_blocks=1 00:17:28.372 --rc geninfo_unexecuted_blocks=1 00:17:28.372 00:17:28.372 ' 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:28.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.372 --rc genhtml_branch_coverage=1 00:17:28.372 --rc genhtml_function_coverage=1 00:17:28.372 --rc genhtml_legend=1 00:17:28.372 --rc geninfo_all_blocks=1 00:17:28.372 --rc geninfo_unexecuted_blocks=1 00:17:28.372 00:17:28.372 ' 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:28.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.372 --rc genhtml_branch_coverage=1 
00:17:28.372 --rc genhtml_function_coverage=1 00:17:28.372 --rc genhtml_legend=1 00:17:28.372 --rc geninfo_all_blocks=1 00:17:28.372 --rc geninfo_unexecuted_blocks=1 00:17:28.372 00:17:28.372 ' 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:28.372 22:40:31 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:28.372 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.372 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:28.373 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.373 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:28.373 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:28.373 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:28.373 22:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.907 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:30.907 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:30.907 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:30.907 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:30.907 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:30.907 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:30.908 22:40:33 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:30.908 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:30.908 22:40:33 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:30.908 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.908 22:40:33 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:30.908 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:30.908 Found net devices under 0000:0a:00.1: cvl_0_1 
00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:30.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:30.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:17:30.908 00:17:30.908 --- 10.0.0.2 ping statistics --- 00:17:30.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.908 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:30.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:30.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:17:30.908 00:17:30.908 --- 10.0.0.1 ping statistics --- 00:17:30.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.908 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:30.908 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:30.909 22:40:33 
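The interface setup traced above (flushing addresses, moving the target-side device into a network namespace, addressing both ends, opening TCP port 4420, and ping-verifying both directions) can be condensed into a standalone sketch. Interface names, addresses, and the namespace name are taken from the log; root privileges are assumed, and this is an illustrative summary of what nvmf/common.sh's nvmf_tcp_init does, not the harness code itself.

```shell
# Sketch of the TCP test topology built by nvmf_tcp_init (assumes root).
# Device names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses come from the log.
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"             # target side lives in the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator IP (host side)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Accept NVMe/TCP traffic on the default port 4420
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Verify connectivity in both directions, as the log does
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```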
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:30.909 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:30.909 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.909 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=220751 00:17:30.909 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:30.909 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 220751 00:17:30.909 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 220751 ']' 00:17:30.909 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.909 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:30.909 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.909 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:30.909 22:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.909 [2024-10-11 22:40:34.029601] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:17:30.909 [2024-10-11 22:40:34.029682] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.909 [2024-10-11 22:40:34.094509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.909 [2024-10-11 22:40:34.143124] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.909 [2024-10-11 22:40:34.143194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.909 [2024-10-11 22:40:34.143207] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.909 [2024-10-11 22:40:34.143218] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.909 [2024-10-11 22:40:34.143227] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
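The nvmfappstart/waitforlisten sequence traced here (launch nvmf_tgt inside the namespace, then poll until its RPC socket answers) can be sketched as below. The binary path, namespace, and flags are from the log; the polling loop is illustrative — the harness's waitforlisten helper is more elaborate — and `rpc_get_methods` is used here simply as a cheap liveness probe.

```shell
# Sketch: start the SPDK target in the test namespace and wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk \
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# waitforlisten (simplified): poll the UNIX-domain RPC socket until it responds
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail out if the target died
  sleep 0.1
done
```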
00:17:30.909 [2024-10-11 22:40:34.143847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:31.168 [2024-10-11 22:40:34.289512] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:31.168 [2024-10-11 22:40:34.305748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:31.168 NULL1 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.168 22:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:31.168 [2024-10-11 22:40:34.349172] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:17:31.168 [2024-10-11 22:40:34.349205] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid220792 ] 00:17:31.735 Attached to nqn.2016-06.io.spdk:cnode1 00:17:31.735 Namespace ID: 1 size: 1GB 00:17:31.735 fused_ordering(0) 00:17:31.735 fused_ordering(1) 00:17:31.735 fused_ordering(2) 00:17:31.735 fused_ordering(3) 00:17:31.735 fused_ordering(4) 00:17:31.735 fused_ordering(5) 00:17:31.735 fused_ordering(6) 00:17:31.735 fused_ordering(7) 00:17:31.735 fused_ordering(8) 00:17:31.735 fused_ordering(9) 00:17:31.735 fused_ordering(10) 00:17:31.735 fused_ordering(11) 00:17:31.735 fused_ordering(12) 00:17:31.735 fused_ordering(13) 00:17:31.735 fused_ordering(14) 00:17:31.735 fused_ordering(15) 00:17:31.735 fused_ordering(16) 00:17:31.735 fused_ordering(17) 00:17:31.735 fused_ordering(18) 00:17:31.735 fused_ordering(19) 00:17:31.735 fused_ordering(20) 00:17:31.735 fused_ordering(21) 00:17:31.735 fused_ordering(22) 00:17:31.735 fused_ordering(23) 00:17:31.735 fused_ordering(24) 00:17:31.735 fused_ordering(25) 00:17:31.735 fused_ordering(26) 00:17:31.735 fused_ordering(27) 00:17:31.735 
[fused_ordering(28) through fused_ordering(755) elided: repetitive per-command trace lines, timestamped 00:17:31.735 to 00:17:32.819]
00:17:32.819 fused_ordering(756) 00:17:32.819 fused_ordering(757) 00:17:32.819 fused_ordering(758) 00:17:32.819 fused_ordering(759) 00:17:32.819 fused_ordering(760) 00:17:32.820 fused_ordering(761) 00:17:32.820 fused_ordering(762) 00:17:32.820 fused_ordering(763) 00:17:32.820 fused_ordering(764) 00:17:32.820 fused_ordering(765) 00:17:32.820 fused_ordering(766) 00:17:32.820 fused_ordering(767) 00:17:32.820 fused_ordering(768) 00:17:32.820 fused_ordering(769) 00:17:32.820 fused_ordering(770) 00:17:32.820 fused_ordering(771) 00:17:32.820 fused_ordering(772) 00:17:32.820 fused_ordering(773) 00:17:32.820 fused_ordering(774) 00:17:32.820 fused_ordering(775) 00:17:32.820 fused_ordering(776) 00:17:32.820 fused_ordering(777) 00:17:32.820 fused_ordering(778) 00:17:32.820 fused_ordering(779) 00:17:32.820 fused_ordering(780) 00:17:32.820 fused_ordering(781) 00:17:32.820 fused_ordering(782) 00:17:32.820 fused_ordering(783) 00:17:32.820 fused_ordering(784) 00:17:32.820 fused_ordering(785) 00:17:32.820 fused_ordering(786) 00:17:32.820 fused_ordering(787) 00:17:32.820 fused_ordering(788) 00:17:32.820 fused_ordering(789) 00:17:32.820 fused_ordering(790) 00:17:32.820 fused_ordering(791) 00:17:32.820 fused_ordering(792) 00:17:32.820 fused_ordering(793) 00:17:32.820 fused_ordering(794) 00:17:32.820 fused_ordering(795) 00:17:32.820 fused_ordering(796) 00:17:32.820 fused_ordering(797) 00:17:32.820 fused_ordering(798) 00:17:32.820 fused_ordering(799) 00:17:32.820 fused_ordering(800) 00:17:32.820 fused_ordering(801) 00:17:32.820 fused_ordering(802) 00:17:32.820 fused_ordering(803) 00:17:32.820 fused_ordering(804) 00:17:32.820 fused_ordering(805) 00:17:32.820 fused_ordering(806) 00:17:32.820 fused_ordering(807) 00:17:32.820 fused_ordering(808) 00:17:32.820 fused_ordering(809) 00:17:32.820 fused_ordering(810) 00:17:32.820 fused_ordering(811) 00:17:32.820 fused_ordering(812) 00:17:32.820 fused_ordering(813) 00:17:32.820 fused_ordering(814) 00:17:32.820 fused_ordering(815) 00:17:32.820 
fused_ordering(816) 00:17:32.820 fused_ordering(817) 00:17:32.820 fused_ordering(818) 00:17:32.820 fused_ordering(819) 00:17:32.820 fused_ordering(820) 00:17:33.386 fused_ordering(821) 00:17:33.386 fused_ordering(822) 00:17:33.386 fused_ordering(823) 00:17:33.386 fused_ordering(824) 00:17:33.386 fused_ordering(825) 00:17:33.386 fused_ordering(826) 00:17:33.386 fused_ordering(827) 00:17:33.386 fused_ordering(828) 00:17:33.386 fused_ordering(829) 00:17:33.386 fused_ordering(830) 00:17:33.386 fused_ordering(831) 00:17:33.386 fused_ordering(832) 00:17:33.386 fused_ordering(833) 00:17:33.386 fused_ordering(834) 00:17:33.386 fused_ordering(835) 00:17:33.386 fused_ordering(836) 00:17:33.386 fused_ordering(837) 00:17:33.386 fused_ordering(838) 00:17:33.386 fused_ordering(839) 00:17:33.386 fused_ordering(840) 00:17:33.386 fused_ordering(841) 00:17:33.386 fused_ordering(842) 00:17:33.386 fused_ordering(843) 00:17:33.386 fused_ordering(844) 00:17:33.386 fused_ordering(845) 00:17:33.386 fused_ordering(846) 00:17:33.386 fused_ordering(847) 00:17:33.386 fused_ordering(848) 00:17:33.386 fused_ordering(849) 00:17:33.386 fused_ordering(850) 00:17:33.386 fused_ordering(851) 00:17:33.386 fused_ordering(852) 00:17:33.386 fused_ordering(853) 00:17:33.386 fused_ordering(854) 00:17:33.386 fused_ordering(855) 00:17:33.386 fused_ordering(856) 00:17:33.386 fused_ordering(857) 00:17:33.386 fused_ordering(858) 00:17:33.386 fused_ordering(859) 00:17:33.386 fused_ordering(860) 00:17:33.386 fused_ordering(861) 00:17:33.386 fused_ordering(862) 00:17:33.386 fused_ordering(863) 00:17:33.386 fused_ordering(864) 00:17:33.386 fused_ordering(865) 00:17:33.386 fused_ordering(866) 00:17:33.386 fused_ordering(867) 00:17:33.386 fused_ordering(868) 00:17:33.386 fused_ordering(869) 00:17:33.387 fused_ordering(870) 00:17:33.387 fused_ordering(871) 00:17:33.387 fused_ordering(872) 00:17:33.387 fused_ordering(873) 00:17:33.387 fused_ordering(874) 00:17:33.387 fused_ordering(875) 00:17:33.387 fused_ordering(876) 
00:17:33.387 fused_ordering(877) 00:17:33.387 fused_ordering(878) 00:17:33.387 fused_ordering(879) 00:17:33.387 fused_ordering(880) 00:17:33.387 fused_ordering(881) 00:17:33.387 fused_ordering(882) 00:17:33.387 fused_ordering(883) 00:17:33.387 fused_ordering(884) 00:17:33.387 fused_ordering(885) 00:17:33.387 fused_ordering(886) 00:17:33.387 fused_ordering(887) 00:17:33.387 fused_ordering(888) 00:17:33.387 fused_ordering(889) 00:17:33.387 fused_ordering(890) 00:17:33.387 fused_ordering(891) 00:17:33.387 fused_ordering(892) 00:17:33.387 fused_ordering(893) 00:17:33.387 fused_ordering(894) 00:17:33.387 fused_ordering(895) 00:17:33.387 fused_ordering(896) 00:17:33.387 fused_ordering(897) 00:17:33.387 fused_ordering(898) 00:17:33.387 fused_ordering(899) 00:17:33.387 fused_ordering(900) 00:17:33.387 fused_ordering(901) 00:17:33.387 fused_ordering(902) 00:17:33.387 fused_ordering(903) 00:17:33.387 fused_ordering(904) 00:17:33.387 fused_ordering(905) 00:17:33.387 fused_ordering(906) 00:17:33.387 fused_ordering(907) 00:17:33.387 fused_ordering(908) 00:17:33.387 fused_ordering(909) 00:17:33.387 fused_ordering(910) 00:17:33.387 fused_ordering(911) 00:17:33.387 fused_ordering(912) 00:17:33.387 fused_ordering(913) 00:17:33.387 fused_ordering(914) 00:17:33.387 fused_ordering(915) 00:17:33.387 fused_ordering(916) 00:17:33.387 fused_ordering(917) 00:17:33.387 fused_ordering(918) 00:17:33.387 fused_ordering(919) 00:17:33.387 fused_ordering(920) 00:17:33.387 fused_ordering(921) 00:17:33.387 fused_ordering(922) 00:17:33.387 fused_ordering(923) 00:17:33.387 fused_ordering(924) 00:17:33.387 fused_ordering(925) 00:17:33.387 fused_ordering(926) 00:17:33.387 fused_ordering(927) 00:17:33.387 fused_ordering(928) 00:17:33.387 fused_ordering(929) 00:17:33.387 fused_ordering(930) 00:17:33.387 fused_ordering(931) 00:17:33.387 fused_ordering(932) 00:17:33.387 fused_ordering(933) 00:17:33.387 fused_ordering(934) 00:17:33.387 fused_ordering(935) 00:17:33.387 fused_ordering(936) 00:17:33.387 
fused_ordering(937) 00:17:33.387 fused_ordering(938) 00:17:33.387 fused_ordering(939) 00:17:33.387 fused_ordering(940) 00:17:33.387 fused_ordering(941) 00:17:33.387 fused_ordering(942) 00:17:33.387 fused_ordering(943) 00:17:33.387 fused_ordering(944) 00:17:33.387 fused_ordering(945) 00:17:33.387 fused_ordering(946) 00:17:33.387 fused_ordering(947) 00:17:33.387 fused_ordering(948) 00:17:33.387 fused_ordering(949) 00:17:33.387 fused_ordering(950) 00:17:33.387 fused_ordering(951) 00:17:33.387 fused_ordering(952) 00:17:33.387 fused_ordering(953) 00:17:33.387 fused_ordering(954) 00:17:33.387 fused_ordering(955) 00:17:33.387 fused_ordering(956) 00:17:33.387 fused_ordering(957) 00:17:33.387 fused_ordering(958) 00:17:33.387 fused_ordering(959) 00:17:33.387 fused_ordering(960) 00:17:33.387 fused_ordering(961) 00:17:33.387 fused_ordering(962) 00:17:33.387 fused_ordering(963) 00:17:33.387 fused_ordering(964) 00:17:33.387 fused_ordering(965) 00:17:33.387 fused_ordering(966) 00:17:33.387 fused_ordering(967) 00:17:33.387 fused_ordering(968) 00:17:33.387 fused_ordering(969) 00:17:33.387 fused_ordering(970) 00:17:33.387 fused_ordering(971) 00:17:33.387 fused_ordering(972) 00:17:33.387 fused_ordering(973) 00:17:33.387 fused_ordering(974) 00:17:33.387 fused_ordering(975) 00:17:33.387 fused_ordering(976) 00:17:33.387 fused_ordering(977) 00:17:33.387 fused_ordering(978) 00:17:33.387 fused_ordering(979) 00:17:33.387 fused_ordering(980) 00:17:33.387 fused_ordering(981) 00:17:33.387 fused_ordering(982) 00:17:33.387 fused_ordering(983) 00:17:33.387 fused_ordering(984) 00:17:33.387 fused_ordering(985) 00:17:33.387 fused_ordering(986) 00:17:33.387 fused_ordering(987) 00:17:33.387 fused_ordering(988) 00:17:33.387 fused_ordering(989) 00:17:33.387 fused_ordering(990) 00:17:33.387 fused_ordering(991) 00:17:33.387 fused_ordering(992) 00:17:33.387 fused_ordering(993) 00:17:33.387 fused_ordering(994) 00:17:33.387 fused_ordering(995) 00:17:33.387 fused_ordering(996) 00:17:33.387 fused_ordering(997) 
00:17:33.387 fused_ordering(998) 00:17:33.387 fused_ordering(999) 00:17:33.387 fused_ordering(1000) 00:17:33.387 fused_ordering(1001) 00:17:33.387 fused_ordering(1002) 00:17:33.387 fused_ordering(1003) 00:17:33.387 fused_ordering(1004) 00:17:33.387 fused_ordering(1005) 00:17:33.387 fused_ordering(1006) 00:17:33.387 fused_ordering(1007) 00:17:33.387 fused_ordering(1008) 00:17:33.387 fused_ordering(1009) 00:17:33.387 fused_ordering(1010) 00:17:33.387 fused_ordering(1011) 00:17:33.387 fused_ordering(1012) 00:17:33.387 fused_ordering(1013) 00:17:33.387 fused_ordering(1014) 00:17:33.387 fused_ordering(1015) 00:17:33.387 fused_ordering(1016) 00:17:33.387 fused_ordering(1017) 00:17:33.387 fused_ordering(1018) 00:17:33.387 fused_ordering(1019) 00:17:33.387 fused_ordering(1020) 00:17:33.387 fused_ordering(1021) 00:17:33.387 fused_ordering(1022) 00:17:33.387 fused_ordering(1023) 00:17:33.387 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:33.387 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:33.387 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:33.387 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:33.387 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:33.387 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:33.387 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:33.387 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:33.387 rmmod nvme_tcp 00:17:33.387 rmmod nvme_fabrics 00:17:33.387 rmmod nvme_keyring 00:17:33.387 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:17:33.387 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:33.387 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:33.387 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 220751 ']' 00:17:33.387 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 220751 00:17:33.387 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 220751 ']' 00:17:33.387 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 220751 00:17:33.387 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:17:33.387 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:33.387 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 220751 00:17:33.387 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:33.387 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:33.387 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 220751' 00:17:33.387 killing process with pid 220751 00:17:33.387 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 220751 00:17:33.387 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 220751 00:17:33.647 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:33.647 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 
00:17:33.647 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:17:33.647 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr
00:17:33.647 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save
00:17:33.647 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:17:33.647 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore
00:17:33.647 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:17:33.647 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns
00:17:33.647 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:33.647 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:33.647 22:40:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:35.558 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:17:35.558
00:17:35.558 real 0m7.301s
00:17:35.558 user 0m4.839s
00:17:35.558 sys 0m2.882s
22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable
00:17:35.558 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:35.558 ************************************
00:17:35.558 END TEST nvmf_fused_ordering
00:17:35.558 ************************************
00:17:35.558 22:40:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:17:35.558 22:40:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:17:35.558 22:40:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:17:35.558 22:40:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:17:35.558 ************************************
00:17:35.558 START TEST nvmf_ns_masking
00:17:35.558 ************************************
00:17:35.558 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp
* Looking for test storage...
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-:
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-:
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<'
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:17:35.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:35.818 --rc genhtml_branch_coverage=1
00:17:35.818 --rc genhtml_function_coverage=1
00:17:35.818 --rc genhtml_legend=1
00:17:35.818 --rc geninfo_all_blocks=1
00:17:35.818 --rc geninfo_unexecuted_blocks=1
00:17:35.818
00:17:35.818 '
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:17:35.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:35.818 --rc genhtml_branch_coverage=1
00:17:35.818 --rc genhtml_function_coverage=1
00:17:35.818 --rc genhtml_legend=1
00:17:35.818 --rc geninfo_all_blocks=1
00:17:35.818 --rc geninfo_unexecuted_blocks=1
00:17:35.818
00:17:35.818 '
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:17:35.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:35.818 --rc genhtml_branch_coverage=1
00:17:35.818 --rc genhtml_function_coverage=1
00:17:35.818 --rc genhtml_legend=1
00:17:35.818 --rc geninfo_all_blocks=1
00:17:35.818 --rc geninfo_unexecuted_blocks=1
00:17:35.818
00:17:35.818 '
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:17:35.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:35.818 --rc genhtml_branch_coverage=1
00:17:35.818 --rc genhtml_function_coverage=1
00:17:35.818 --rc genhtml_legend=1
00:17:35.818 --rc geninfo_all_blocks=1
00:17:35.818 --rc geninfo_unexecuted_blocks=1
00:17:35.818
00:17:35.818 '
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- #
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:35.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen
00:17:35.818 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=4e31cd65-d6f2-4f0c-aaa5-adb0b2172631
00:17:35.819 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen
00:17:35.819 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=2a0d203f-c08c-4bf3-aa07-180d4aa0808f
00:17:35.819 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
00:17:35.819 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1
00:17:35.819 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2
00:17:35.819 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen
00:17:35.819 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=96c3f552-f657-4af5-93de-147a9d56ad16
00:17:35.819 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit
00:17:35.819 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:17:35.819 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:17:35.819 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs
00:17:35.819 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no
00:17:35.819 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns
00:17:35.819 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:35.819 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:35.819 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:35.819 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:17:35.819 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:17:35.819 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable
00:17:35.819 22:40:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=()
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=()
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=()
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=()
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=()
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=()
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=()
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
Found 0000:0a:00.0 (0x8086 - 0x159b)
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking --
nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:38.352 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:17:38.352 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:38.352 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:38.352 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:38.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:38.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:17:38.353 00:17:38.353 --- 10.0.0.2 ping statistics --- 00:17:38.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.353 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:38.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:38.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:17:38.353 00:17:38.353 --- 10.0.0.1 ping statistics --- 00:17:38.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.353 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=222988 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 222988 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 222988 ']' 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:38.353 [2024-10-11 22:40:41.228905] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:17:38.353 [2024-10-11 22:40:41.228989] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.353 [2024-10-11 22:40:41.299206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.353 [2024-10-11 22:40:41.345878] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.353 [2024-10-11 22:40:41.345944] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:38.353 [2024-10-11 22:40:41.345966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.353 [2024-10-11 22:40:41.345977] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.353 [2024-10-11 22:40:41.345987] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:38.353 [2024-10-11 22:40:41.346634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.353 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:38.611 [2024-10-11 22:40:41.779996] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.611 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:38.611 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:38.611 22:40:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:17:38.870 Malloc1 00:17:38.870 22:40:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:39.437 Malloc2 00:17:39.437 22:40:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:39.695 22:40:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:39.953 22:40:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:40.210 [2024-10-11 22:40:43.380232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.210 22:40:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:40.210 22:40:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 96c3f552-f657-4af5-93de-147a9d56ad16 -a 10.0.0.2 -s 4420 -i 4 00:17:40.468 22:40:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:40.468 22:40:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:40.468 22:40:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:40.468 22:40:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:40.468 22:40:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:42.368 22:40:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:42.368 22:40:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:42.368 22:40:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:42.368 22:40:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:42.368 22:40:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:42.368 22:40:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:42.368 22:40:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:42.368 22:40:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:42.626 22:40:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:42.626 22:40:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:42.626 22:40:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:42.626 22:40:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:42.626 22:40:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:42.626 [ 0]:0x1 00:17:42.626 22:40:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:42.626 22:40:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:42.626 
22:40:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=86aa974562e54239bfd009721f9c5687 00:17:42.626 22:40:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 86aa974562e54239bfd009721f9c5687 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:42.626 22:40:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:42.884 22:40:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:42.884 22:40:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:42.884 22:40:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:42.884 [ 0]:0x1 00:17:42.884 22:40:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:42.884 22:40:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:42.884 22:40:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=86aa974562e54239bfd009721f9c5687 00:17:42.884 22:40:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 86aa974562e54239bfd009721f9c5687 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:42.884 22:40:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:42.884 22:40:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:42.884 22:40:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:42.884 [ 1]:0x2 00:17:42.884 22:40:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
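The `ns_is_visible` helper traced above pairs `nvme list-ns` with `nvme id-ns /dev/nvme0 -n <nsid> -o json | jq -r .nguid` and treats an all-zero NGUID as "namespace not presented to this host". A minimal Python sketch of that same comparison (the JSON payloads below are illustrative subsets of `nvme id-ns -o json` output, with the NGUID values copied from this log, not full records):

```python
import json

# An all-zero NGUID is what the test treats as "masked/inactive".
ZERO_NGUID = "0" * 32

def nguid_is_visible(id_ns_json: str) -> bool:
    """Mirror the log's `nvme id-ns ... -o json | jq -r .nguid` check:
    the namespace counts as visible when its NGUID is non-zero."""
    return json.loads(id_ns_json)["nguid"] != ZERO_NGUID

# Sample payloads (NGUIDs taken from the log output above).
visible = '{"nguid": "86aa974562e54239bfd009721f9c5687"}'
masked  = '{"nguid": "00000000000000000000000000000000"}'

print(nguid_is_visible(visible))  # True
print(nguid_is_visible(masked))   # False
```

This is only the comparison step; the real helper also greps `nvme list-ns` output first to confirm the NSID is listed at all.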
00:17:42.884 22:40:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:42.884 22:40:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=93cbf10e67fc4bcfa503a1edb216eebc 00:17:42.884 22:40:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 93cbf10e67fc4bcfa503a1edb216eebc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:42.884 22:40:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:42.884 22:40:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:43.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:43.141 22:40:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:43.399 22:40:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:43.657 22:40:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:43.657 22:40:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 96c3f552-f657-4af5-93de-147a9d56ad16 -a 10.0.0.2 -s 4420 -i 4 00:17:43.657 22:40:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:43.657 22:40:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:43.657 22:40:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:43.657 22:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:17:43.657 22:40:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:17:43.657 22:40:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 
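The behaviour being exercised here (a namespace added with `--no-auto-visible` stays hidden from every host until `nvmf_ns_add_host` attaches that host's NQN, and `nvmf_ns_remove_host` hides it again) can be modelled with a toy class. This is an illustrative sketch of the semantics implied by the RPC names and the visibility checks in this log, not SPDK's actual implementation:

```python
class Namespace:
    """Toy model of per-namespace host masking (illustrative only):
    an auto-visible namespace is seen by every controller, while one
    created with --no-auto-visible is seen only by hosts attached via
    nvmf_ns_add_host, until nvmf_ns_remove_host detaches them."""

    def __init__(self, nsid: int, auto_visible: bool = True):
        self.nsid = nsid
        self.auto_visible = auto_visible
        self.hosts: set[str] = set()

    def add_host(self, hostnqn: str) -> None:      # nvmf_ns_add_host
        self.hosts.add(hostnqn)

    def remove_host(self, hostnqn: str) -> None:   # nvmf_ns_remove_host
        self.hosts.discard(hostnqn)

    def visible_to(self, hostnqn: str) -> bool:
        return self.auto_visible or hostnqn in self.hosts

host1 = "nqn.2016-06.io.spdk:host1"
ns1 = Namespace(1, auto_visible=False)  # as in: nvmf_subsystem_add_ns ... -n 1 --no-auto-visible
print(ns1.visible_to(host1))            # False: masked until the host is attached
ns1.add_host(host1)
print(ns1.visible_to(host1))            # True
ns1.remove_host(host1)
print(ns1.visible_to(host1))            # False again
```

The log follows exactly this arc: the `--no-auto-visible` namespace reports an all-zero NGUID, becomes visible after `nvmf_ns_add_host`, and disappears again after `nvmf_ns_remove_host`.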
00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:46.187 [ 0]:0x2 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:46.187 22:40:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:46.187 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=93cbf10e67fc4bcfa503a1edb216eebc 00:17:46.187 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 93cbf10e67fc4bcfa503a1edb216eebc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:46.187 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:46.187 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:46.187 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:46.187 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:46.187 [ 0]:0x1 00:17:46.187 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:46.187 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:46.187 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=86aa974562e54239bfd009721f9c5687 00:17:46.187 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 86aa974562e54239bfd009721f9c5687 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:46.187 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:46.187 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:46.187 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:46.187 [ 1]:0x2 00:17:46.187 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:46.187 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:46.187 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=93cbf10e67fc4bcfa503a1edb216eebc 00:17:46.187 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 93cbf10e67fc4bcfa503a1edb216eebc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:46.187 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:46.446 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:46.446 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:46.446 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:46.446 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:46.446 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:46.446 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:17:46.446 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:46.446 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:46.446 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:46.446 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:46.446 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:46.446 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:46.446 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:46.446 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:46.446 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:46.446 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:46.446 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:46.446 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:46.446 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:46.446 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:46.446 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:46.446 [ 0]:0x2 00:17:46.446 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:46.446 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:46.704 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=93cbf10e67fc4bcfa503a1edb216eebc 00:17:46.704 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 93cbf10e67fc4bcfa503a1edb216eebc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:46.704 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:46.704 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:46.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:46.704 22:40:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:46.963 22:40:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:46.963 22:40:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 96c3f552-f657-4af5-93de-147a9d56ad16 -a 10.0.0.2 -s 4420 -i 4 00:17:47.221 22:40:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:47.221 22:40:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:47.221 22:40:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:47.221 22:40:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:47.221 22:40:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:47.221 22:40:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:49.121 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:49.121 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:49.121 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:49.121 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:49.121 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:49.121 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:49.121 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:49.122 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:49.379 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:49.379 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:49.380 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:49.380 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:49.380 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:49.380 [ 0]:0x1 00:17:49.380 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:49.380 22:40:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:49.380 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=86aa974562e54239bfd009721f9c5687 00:17:49.380 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 86aa974562e54239bfd009721f9c5687 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:49.380 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:49.380 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:49.380 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:49.380 [ 1]:0x2 00:17:49.380 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:49.380 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:49.380 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=93cbf10e67fc4bcfa503a1edb216eebc 00:17:49.380 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 93cbf10e67fc4bcfa503a1edb216eebc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:49.380 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:49.638 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:49.639 
22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:49.639 [ 0]:0x2 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=93cbf10e67fc4bcfa503a1edb216eebc 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 93cbf10e67fc4bcfa503a1edb216eebc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:49.639 22:40:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:49.639 22:40:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:49.897 [2024-10-11 22:40:53.133747] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:49.897 request: 00:17:49.897 { 00:17:49.897 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:49.897 "nsid": 2, 00:17:49.897 "host": "nqn.2016-06.io.spdk:host1", 00:17:49.897 "method": "nvmf_ns_remove_host", 00:17:49.897 "req_id": 1 00:17:49.897 } 00:17:49.897 Got JSON-RPC error response 00:17:49.897 response: 00:17:49.897 { 00:17:49.897 "code": -32602, 00:17:49.897 "message": "Invalid parameters" 00:17:49.897 } 00:17:49.897 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:49.897 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:49.897 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:49.897 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:49.897 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:49.897 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:49.897 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:49.897 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:49.897 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:49.897 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:49.897 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:49.897 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:49.897 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:49.897 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:50.156 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:50.156 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:50.156 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:50.156 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:50.156 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:50.156 22:40:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:50.156 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:50.156 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:50.156 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:50.156 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:50.156 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:50.156 [ 0]:0x2 00:17:50.156 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:50.156 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:50.156 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=93cbf10e67fc4bcfa503a1edb216eebc 00:17:50.156 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 93cbf10e67fc4bcfa503a1edb216eebc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:50.156 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:50.156 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:50.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:50.156 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=224592 00:17:50.156 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:50.156 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:50.156 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 224592 /var/tmp/host.sock 00:17:50.156 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 224592 ']' 00:17:50.156 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:17:50.156 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:50.156 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:50.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:50.156 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:50.156 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:50.156 [2024-10-11 22:40:53.349390] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:17:50.156 [2024-10-11 22:40:53.349487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224592 ] 00:17:50.156 [2024-10-11 22:40:53.412039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.414 [2024-10-11 22:40:53.458251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.672 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:50.672 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:17:50.673 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:50.931 22:40:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:51.188 22:40:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 4e31cd65-d6f2-4f0c-aaa5-adb0b2172631 00:17:51.188 22:40:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:17:51.188 22:40:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4E31CD65D6F24F0CAAA5ADB0B2172631 -i 00:17:51.445 22:40:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 2a0d203f-c08c-4bf3-aa07-180d4aa0808f 00:17:51.445 22:40:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:17:51.445 22:40:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 2A0D203FC08C4BF3AA07180D4AA0808F -i 00:17:51.703 22:40:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:51.961 22:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:52.219 22:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:52.219 22:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:52.477 nvme0n1 00:17:52.477 22:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:52.477 22:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:52.735 nvme1n2 00:17:52.735 22:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:52.735 22:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:52.735 22:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:52.735 22:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:52.735 22:40:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:53.299 22:40:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:53.299 22:40:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:53.299 22:40:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:53.299 22:40:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:53.299 22:40:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 4e31cd65-d6f2-4f0c-aaa5-adb0b2172631 == \4\e\3\1\c\d\6\5\-\d\6\f\2\-\4\f\0\c\-\a\a\a\5\-\a\d\b\0\b\2\1\7\2\6\3\1 ]] 00:17:53.299 22:40:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:53.299 22:40:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:53.299 22:40:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:53.557 22:40:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 2a0d203f-c08c-4bf3-aa07-180d4aa0808f == \2\a\0\d\2\0\3\f\-\c\0\8\c\-\4\b\f\3\-\a\a\0\7\-\1\8\0\d\4\a\a\0\8\0\8\f ]] 00:17:53.557 22:40:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 224592 00:17:53.557 22:40:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 224592 ']' 00:17:53.557 22:40:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 224592 00:17:53.557 22:40:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:17:53.557 22:40:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:53.557 22:40:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 224592 00:17:53.814 22:40:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:53.814 22:40:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:53.814 22:40:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 224592' 00:17:53.814 killing process with pid 224592 00:17:53.814 22:40:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 224592 00:17:53.814 22:40:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 224592 00:17:54.072 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:54.330 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:17:54.330 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:17:54.330 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:54.330 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@121 -- # sync 00:17:54.330 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:54.330 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:54.330 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:54.330 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:54.330 rmmod nvme_tcp 00:17:54.330 rmmod nvme_fabrics 00:17:54.330 rmmod nvme_keyring 00:17:54.330 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:54.330 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:54.330 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:54.330 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 222988 ']' 00:17:54.330 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 222988 00:17:54.330 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 222988 ']' 00:17:54.330 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 222988 00:17:54.330 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:17:54.330 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:54.330 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 222988 00:17:54.589 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:54.589 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:54.589 22:40:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 222988' 00:17:54.589 killing process with pid 222988 00:17:54.589 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 222988 00:17:54.589 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 222988 00:17:54.589 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:54.589 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:54.589 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:54.589 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:54.589 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:17:54.589 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:54.589 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:17:54.589 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:54.589 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:54.589 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.589 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:54.589 22:40:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.242 22:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:57.243 00:17:57.243 real 0m21.121s 00:17:57.243 user 0m27.998s 00:17:57.243 sys 
0m4.129s 00:17:57.243 22:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:57.243 22:40:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:57.243 ************************************ 00:17:57.243 END TEST nvmf_ns_masking 00:17:57.243 ************************************ 00:17:57.243 22:40:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:57.243 22:40:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:57.243 22:40:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:57.243 22:40:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:57.243 22:40:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:57.243 ************************************ 00:17:57.243 START TEST nvmf_nvme_cli 00:17:57.243 ************************************ 00:17:57.243 22:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:57.243 * Looking for test storage... 
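The nvmf_ns_masking run above repeatedly verifies masking by reading a namespace's NGUID: a visible namespace reports its real NGUID, while a masked one identifies with all zeros (the `[[ $nguid != \0\0...\0 ]]` checks in the log). A minimal standalone sketch of that comparison — `nguid_is_nonzero` is a hypothetical helper name, not part of the test suite, and on a live target the NGUID would come from `nvme id-ns /dev/nvme0 -n <nsid> -o json | jq -r .nguid`:

```shell
#!/usr/bin/env bash
# Sketch of the visibility check ns_masking.sh's ns_is_visible() performs:
# compare the NGUID reported by "nvme id-ns ... -o json | jq -r .nguid"
# against the all-zeros value a masked (inactive) namespace returns.
# "nguid_is_nonzero" is an assumed helper name for this sketch.
nguid_is_nonzero() {
    [[ $1 != "00000000000000000000000000000000" ]]
}

# On a live target (device path and NSID below are placeholders):
#   nguid=$(nvme id-ns /dev/nvme0 -n 0x2 -o json | jq -r .nguid)
#   nguid_is_nonzero "$nguid" && echo "namespace visible"
```

This mirrors the log above: after `nvmf_ns_add_host` the check sees a real NGUID such as `93cbf10e67fc4bcfa503a1edb216eebc`, and after `nvmf_ns_remove_host` it sees `00000000000000000000000000000000`.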
00:17:57.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:57.243 22:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:57.243 22:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:17:57.243 22:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:17:57.243 22:41:00 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:57.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.243 --rc 
genhtml_branch_coverage=1 00:17:57.243 --rc genhtml_function_coverage=1 00:17:57.243 --rc genhtml_legend=1 00:17:57.243 --rc geninfo_all_blocks=1 00:17:57.243 --rc geninfo_unexecuted_blocks=1 00:17:57.243 00:17:57.243 ' 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:57.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.243 --rc genhtml_branch_coverage=1 00:17:57.243 --rc genhtml_function_coverage=1 00:17:57.243 --rc genhtml_legend=1 00:17:57.243 --rc geninfo_all_blocks=1 00:17:57.243 --rc geninfo_unexecuted_blocks=1 00:17:57.243 00:17:57.243 ' 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:57.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.243 --rc genhtml_branch_coverage=1 00:17:57.243 --rc genhtml_function_coverage=1 00:17:57.243 --rc genhtml_legend=1 00:17:57.243 --rc geninfo_all_blocks=1 00:17:57.243 --rc geninfo_unexecuted_blocks=1 00:17:57.243 00:17:57.243 ' 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:57.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.243 --rc genhtml_branch_coverage=1 00:17:57.243 --rc genhtml_function_coverage=1 00:17:57.243 --rc genhtml_legend=1 00:17:57.243 --rc geninfo_all_blocks=1 00:17:57.243 --rc geninfo_unexecuted_blocks=1 00:17:57.243 00:17:57.243 ' 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:57.243 22:41:00 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:17:57.243 22:41:00 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:57.243 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:57.244 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:57.244 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.244 22:41:00 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.244 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:57.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:57.244 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:57.244 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:57.244 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:57.244 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:57.244 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:57.244 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:57.244 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:57.244 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:57.244 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:57.244 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:57.244 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:57.244 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:57.244 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.244 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:57.244 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:17:57.244 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:57.244 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:57.244 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:17:57.244 22:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:17:59.178 22:41:02 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:59.178 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:59.178 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:59.179 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.179 22:41:02 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:59.179 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:59.179 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:59.179 22:41:02 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:59.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:59.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:17:59.179 00:17:59.179 --- 10.0.0.2 ping statistics --- 00:17:59.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.179 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:59.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:59.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:17:59.179 00:17:59.179 --- 10.0.0.1 ping statistics --- 00:17:59.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.179 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:59.179 22:41:02 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=227093 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 227093 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 227093 ']' 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:59.179 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:59.179 [2024-10-11 22:41:02.387402] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:17:59.179 [2024-10-11 22:41:02.387502] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.438 [2024-10-11 22:41:02.458015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:59.438 [2024-10-11 22:41:02.503741] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.438 [2024-10-11 22:41:02.503801] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.438 [2024-10-11 22:41:02.503825] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:59.438 [2024-10-11 22:41:02.503837] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:59.438 [2024-10-11 22:41:02.503860] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:59.438 [2024-10-11 22:41:02.505409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.438 [2024-10-11 22:41:02.505519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.438 [2024-10-11 22:41:02.505604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:59.438 [2024-10-11 22:41:02.505608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.438 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:59.438 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:17:59.438 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:59.438 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:59.438 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:59.438 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.438 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:59.438 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.438 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:59.438 [2024-10-11 22:41:02.639320] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.438 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.438 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:59.438 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:59.438 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:59.438 Malloc0 00:17:59.438 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.438 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:59.438 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.438 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:59.438 Malloc1 00:17:59.696 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.696 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:59.696 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.696 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:59.696 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.696 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:59.696 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.696 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:59.696 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.696 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:59.696 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.696 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:59.696 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.696 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:59.696 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.696 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:59.696 [2024-10-11 22:41:02.734456] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.696 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.696 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:59.696 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.696 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:59.696 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.696 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:17:59.697 00:17:59.697 Discovery Log Number of Records 2, Generation counter 2 00:17:59.697 =====Discovery Log Entry 0====== 00:17:59.697 trtype: tcp 00:17:59.697 adrfam: ipv4 00:17:59.697 subtype: current discovery subsystem 00:17:59.697 treq: not required 00:17:59.697 portid: 0 00:17:59.697 trsvcid: 4420 
00:17:59.697 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:59.697 traddr: 10.0.0.2 00:17:59.697 eflags: explicit discovery connections, duplicate discovery information 00:17:59.697 sectype: none 00:17:59.697 =====Discovery Log Entry 1====== 00:17:59.697 trtype: tcp 00:17:59.697 adrfam: ipv4 00:17:59.697 subtype: nvme subsystem 00:17:59.697 treq: not required 00:17:59.697 portid: 0 00:17:59.697 trsvcid: 4420 00:17:59.697 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:59.697 traddr: 10.0.0.2 00:17:59.697 eflags: none 00:17:59.697 sectype: none 00:17:59.697 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:59.697 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:59.697 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:17:59.697 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:59.697 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:17:59.697 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:17:59.697 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:59.697 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:17:59.697 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:59.697 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:59.697 22:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:00.631 22:41:03 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:00.631 22:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:18:00.631 22:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:00.631 22:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:18:00.631 22:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:18:00.631 22:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:18:02.532 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:02.532 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:02.532 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:02.532 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:18:02.532 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:02.532 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:18:02.532 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:02.532 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:18:02.532 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:02.532 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:18:02.532 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:18:02.532 
22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:02.532 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:18:02.532 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:02.532 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:02.532 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:18:02.532 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:02.532 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:02.532 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:18:02.532 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:02.532 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:02.532 /dev/nvme0n2 ]] 00:18:02.532 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:02.532 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:02.532 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:18:02.532 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:02.532 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:18:02.789 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:18:02.789 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:02.789 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ 
--------------------- == /dev/nvme* ]] 00:18:02.789 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:02.789 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:02.789 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:18:02.789 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:02.789 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:02.789 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:18:02.789 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:02.789 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:02.789 22:41:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:03.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # 
return 0 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:03.047 rmmod nvme_tcp 00:18:03.047 rmmod nvme_fabrics 00:18:03.047 rmmod nvme_keyring 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 227093 ']' 
00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 227093 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 227093 ']' 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 227093 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 227093 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 227093' 00:18:03.047 killing process with pid 227093 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 227093 00:18:03.047 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 227093 00:18:03.307 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:03.307 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:03.307 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:03.307 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:03.307 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:18:03.307 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 
00:18:03.307 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:18:03.307 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:03.307 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:03.307 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.307 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:03.307 22:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.848 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:05.848 00:18:05.848 real 0m8.606s 00:18:05.848 user 0m16.517s 00:18:05.848 sys 0m2.315s 00:18:05.848 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:05.848 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:05.848 ************************************ 00:18:05.848 END TEST nvmf_nvme_cli 00:18:05.848 ************************************ 00:18:05.848 22:41:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:05.848 22:41:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:05.848 22:41:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:05.848 22:41:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:05.848 22:41:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:05.848 ************************************ 00:18:05.848 START TEST 
nvmf_vfio_user 00:18:05.848 ************************************ 00:18:05.848 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:05.848 * Looking for test storage... 00:18:05.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:05.848 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:05.848 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:18:05.848 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:05.848 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:05.848 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:05.848 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:05.849 22:41:08 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:05.849 22:41:08 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:05.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.849 --rc genhtml_branch_coverage=1 00:18:05.849 --rc genhtml_function_coverage=1 00:18:05.849 --rc genhtml_legend=1 00:18:05.849 --rc geninfo_all_blocks=1 00:18:05.849 --rc geninfo_unexecuted_blocks=1 00:18:05.849 00:18:05.849 ' 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:05.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.849 --rc genhtml_branch_coverage=1 00:18:05.849 --rc genhtml_function_coverage=1 00:18:05.849 --rc genhtml_legend=1 00:18:05.849 --rc geninfo_all_blocks=1 00:18:05.849 --rc geninfo_unexecuted_blocks=1 00:18:05.849 00:18:05.849 ' 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:05.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.849 --rc genhtml_branch_coverage=1 00:18:05.849 --rc genhtml_function_coverage=1 00:18:05.849 --rc genhtml_legend=1 00:18:05.849 --rc geninfo_all_blocks=1 00:18:05.849 --rc geninfo_unexecuted_blocks=1 00:18:05.849 00:18:05.849 ' 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:05.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.849 --rc genhtml_branch_coverage=1 00:18:05.849 --rc genhtml_function_coverage=1 00:18:05.849 --rc genhtml_legend=1 00:18:05.849 --rc geninfo_all_blocks=1 00:18:05.849 --rc geninfo_unexecuted_blocks=1 00:18:05.849 00:18:05.849 ' 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:05.849 
22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:05.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:05.849 22:41:08 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=228023 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 228023' 00:18:05.849 Process pid: 228023 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 228023 00:18:05.849 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 
228023 ']' 00:18:05.850 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.850 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:05.850 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.850 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:05.850 22:41:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:05.850 [2024-10-11 22:41:08.804077] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:18:05.850 [2024-10-11 22:41:08.804150] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.850 [2024-10-11 22:41:08.862328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:05.850 [2024-10-11 22:41:08.907818] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.850 [2024-10-11 22:41:08.907886] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:05.850 [2024-10-11 22:41:08.907908] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.850 [2024-10-11 22:41:08.907919] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.850 [2024-10-11 22:41:08.907929] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:05.850 [2024-10-11 22:41:08.909341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.850 [2024-10-11 22:41:08.909447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:05.850 [2024-10-11 22:41:08.909534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:05.850 [2024-10-11 22:41:08.909537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.850 22:41:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:05.850 22:41:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:18:05.850 22:41:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:06.783 22:41:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:07.349 22:41:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:07.349 22:41:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:07.349 22:41:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:07.349 22:41:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:07.349 22:41:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:07.349 Malloc1 00:18:07.607 22:41:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:07.865 22:41:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:08.123 22:41:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:08.381 22:41:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:08.381 22:41:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:08.381 22:41:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:08.640 Malloc2 00:18:08.640 22:41:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:08.898 22:41:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:09.156 22:41:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:09.416 22:41:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:09.416 22:41:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:09.416 22:41:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:18:09.416 22:41:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:09.416 22:41:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:09.416 22:41:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:09.416 [2024-10-11 22:41:12.551654] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:18:09.416 [2024-10-11 22:41:12.551697] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid228445 ] 00:18:09.416 [2024-10-11 22:41:12.583384] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:09.416 [2024-10-11 22:41:12.596071] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:09.416 [2024-10-11 22:41:12.596100] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff771107000 00:18:09.416 [2024-10-11 22:41:12.597068] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:09.416 [2024-10-11 22:41:12.598062] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:09.416 [2024-10-11 22:41:12.599067] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:09.417 [2024-10-11 22:41:12.600069] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:09.417 [2024-10-11 22:41:12.601077] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:09.417 [2024-10-11 22:41:12.602083] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:09.417 [2024-10-11 22:41:12.603089] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:09.417 [2024-10-11 22:41:12.604095] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:09.417 [2024-10-11 22:41:12.605102] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:09.417 [2024-10-11 22:41:12.605122] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff76f5f5000 00:18:09.417 [2024-10-11 22:41:12.606285] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:09.417 [2024-10-11 22:41:12.621911] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:09.417 [2024-10-11 22:41:12.621947] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:18:09.417 [2024-10-11 22:41:12.624219] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:09.417 [2024-10-11 22:41:12.624274] 
nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:09.417 [2024-10-11 22:41:12.624367] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:18:09.417 [2024-10-11 22:41:12.624395] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:18:09.417 [2024-10-11 22:41:12.624405] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:18:09.417 [2024-10-11 22:41:12.625214] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:09.417 [2024-10-11 22:41:12.625233] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:18:09.417 [2024-10-11 22:41:12.625245] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:18:09.417 [2024-10-11 22:41:12.626217] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:09.417 [2024-10-11 22:41:12.626237] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:18:09.417 [2024-10-11 22:41:12.626257] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:18:09.417 [2024-10-11 22:41:12.627221] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:09.417 [2024-10-11 22:41:12.627239] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:09.417 [2024-10-11 22:41:12.628229] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:18:09.417 [2024-10-11 22:41:12.628247] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:18:09.417 [2024-10-11 22:41:12.628255] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:18:09.417 [2024-10-11 22:41:12.628266] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:09.417 [2024-10-11 22:41:12.628375] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:18:09.417 [2024-10-11 22:41:12.628382] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:09.417 [2024-10-11 22:41:12.628391] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:09.417 [2024-10-11 22:41:12.629232] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:09.417 [2024-10-11 22:41:12.630234] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:09.417 [2024-10-11 22:41:12.631242] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:09.417 [2024-10-11 22:41:12.632241] vfio_user.c:2836:enable_ctrlr: 
*NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:09.417 [2024-10-11 22:41:12.632337] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:09.417 [2024-10-11 22:41:12.633264] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:09.417 [2024-10-11 22:41:12.633281] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:09.417 [2024-10-11 22:41:12.633290] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:18:09.417 [2024-10-11 22:41:12.633313] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:18:09.417 [2024-10-11 22:41:12.633326] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:18:09.417 [2024-10-11 22:41:12.633352] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:09.417 [2024-10-11 22:41:12.633362] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:09.417 [2024-10-11 22:41:12.633368] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:09.417 [2024-10-11 22:41:12.633386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:09.417 [2024-10-11 22:41:12.633461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:09.417 [2024-10-11 
22:41:12.633480] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:18:09.417 [2024-10-11 22:41:12.633489] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:18:09.417 [2024-10-11 22:41:12.633495] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:18:09.417 [2024-10-11 22:41:12.633503] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:09.417 [2024-10-11 22:41:12.633510] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:18:09.417 [2024-10-11 22:41:12.633516] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:18:09.417 [2024-10-11 22:41:12.633524] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:18:09.417 [2024-10-11 22:41:12.633560] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:18:09.417 [2024-10-11 22:41:12.633578] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:09.417 [2024-10-11 22:41:12.633602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:09.417 [2024-10-11 22:41:12.633618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:09.417 [2024-10-11 22:41:12.633631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 
cdw10:00000000 cdw11:00000000 00:18:09.417 [2024-10-11 22:41:12.633642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:09.417 [2024-10-11 22:41:12.633654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:09.417 [2024-10-11 22:41:12.633662] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:09.417 [2024-10-11 22:41:12.633680] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:09.417 [2024-10-11 22:41:12.633696] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:09.417 [2024-10-11 22:41:12.633708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:09.417 [2024-10-11 22:41:12.633718] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:18:09.417 [2024-10-11 22:41:12.633726] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:09.417 [2024-10-11 22:41:12.633737] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:18:09.417 [2024-10-11 22:41:12.633750] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:18:09.417 [2024-10-11 22:41:12.633764] nvme_qpair.c: 213:nvme_admin_qpair_print_command: 
*NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:09.417 [2024-10-11 22:41:12.633779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:09.417 [2024-10-11 22:41:12.633859] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:18:09.417 [2024-10-11 22:41:12.633878] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:18:09.417 [2024-10-11 22:41:12.633892] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:09.417 [2024-10-11 22:41:12.633900] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:09.417 [2024-10-11 22:41:12.633906] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:09.417 [2024-10-11 22:41:12.633915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:09.417 [2024-10-11 22:41:12.633931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:09.417 [2024-10-11 22:41:12.633946] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:18:09.417 [2024-10-11 22:41:12.633965] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:18:09.417 [2024-10-11 22:41:12.633979] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:18:09.417 [2024-10-11 22:41:12.633990] 
nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:09.417 [2024-10-11 22:41:12.633998] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:09.417 [2024-10-11 22:41:12.634004] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:09.417 [2024-10-11 22:41:12.634012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:09.417 [2024-10-11 22:41:12.634046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:09.417 [2024-10-11 22:41:12.634066] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:09.418 [2024-10-11 22:41:12.634080] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:09.418 [2024-10-11 22:41:12.634091] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:09.418 [2024-10-11 22:41:12.634099] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:09.418 [2024-10-11 22:41:12.634105] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:09.418 [2024-10-11 22:41:12.634114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:09.418 [2024-10-11 22:41:12.634125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:09.418 [2024-10-11 22:41:12.634137] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:09.418 [2024-10-11 22:41:12.634148] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:18:09.418 [2024-10-11 22:41:12.634161] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:18:09.418 [2024-10-11 22:41:12.634171] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:18:09.418 [2024-10-11 22:41:12.634179] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:09.418 [2024-10-11 22:41:12.634190] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:18:09.418 [2024-10-11 22:41:12.634198] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:18:09.418 [2024-10-11 22:41:12.634205] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:18:09.418 [2024-10-11 22:41:12.634213] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:18:09.418 [2024-10-11 22:41:12.634238] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:09.418 [2024-10-11 22:41:12.634252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:09.418 [2024-10-11 22:41:12.634270] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:09.418 [2024-10-11 22:41:12.634281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:09.418 [2024-10-11 22:41:12.634297] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:09.418 [2024-10-11 22:41:12.634309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:09.418 [2024-10-11 22:41:12.634324] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:09.418 [2024-10-11 22:41:12.634336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:09.418 [2024-10-11 22:41:12.634358] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:09.418 [2024-10-11 22:41:12.634368] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:09.418 [2024-10-11 22:41:12.634374] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:09.418 [2024-10-11 22:41:12.634380] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:09.418 [2024-10-11 22:41:12.634385] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:09.418 [2024-10-11 22:41:12.634394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:09.418 [2024-10-11 22:41:12.634405] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:09.418 [2024-10-11 
22:41:12.634413] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:09.418 [2024-10-11 22:41:12.634418] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:09.418 [2024-10-11 22:41:12.634426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:09.418 [2024-10-11 22:41:12.634437] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:09.418 [2024-10-11 22:41:12.634444] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:09.418 [2024-10-11 22:41:12.634450] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:09.418 [2024-10-11 22:41:12.634458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:09.418 [2024-10-11 22:41:12.634469] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:09.418 [2024-10-11 22:41:12.634477] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:09.418 [2024-10-11 22:41:12.634486] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:09.418 [2024-10-11 22:41:12.634495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:09.418 [2024-10-11 22:41:12.634506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:09.418 [2024-10-11 22:41:12.634525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 
00:18:09.418 [2024-10-11 22:41:12.634542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:09.418 [2024-10-11 22:41:12.634579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:09.418 ===================================================== 00:18:09.418 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:09.418 ===================================================== 00:18:09.418 Controller Capabilities/Features 00:18:09.418 ================================ 00:18:09.418 Vendor ID: 4e58 00:18:09.418 Subsystem Vendor ID: 4e58 00:18:09.418 Serial Number: SPDK1 00:18:09.418 Model Number: SPDK bdev Controller 00:18:09.418 Firmware Version: 25.01 00:18:09.418 Recommended Arb Burst: 6 00:18:09.418 IEEE OUI Identifier: 8d 6b 50 00:18:09.418 Multi-path I/O 00:18:09.418 May have multiple subsystem ports: Yes 00:18:09.418 May have multiple controllers: Yes 00:18:09.418 Associated with SR-IOV VF: No 00:18:09.418 Max Data Transfer Size: 131072 00:18:09.418 Max Number of Namespaces: 32 00:18:09.418 Max Number of I/O Queues: 127 00:18:09.418 NVMe Specification Version (VS): 1.3 00:18:09.418 NVMe Specification Version (Identify): 1.3 00:18:09.418 Maximum Queue Entries: 256 00:18:09.418 Contiguous Queues Required: Yes 00:18:09.418 Arbitration Mechanisms Supported 00:18:09.418 Weighted Round Robin: Not Supported 00:18:09.418 Vendor Specific: Not Supported 00:18:09.418 Reset Timeout: 15000 ms 00:18:09.418 Doorbell Stride: 4 bytes 00:18:09.418 NVM Subsystem Reset: Not Supported 00:18:09.418 Command Sets Supported 00:18:09.418 NVM Command Set: Supported 00:18:09.418 Boot Partition: Not Supported 00:18:09.418 Memory Page Size Minimum: 4096 bytes 00:18:09.418 Memory Page Size Maximum: 4096 bytes 00:18:09.418 Persistent Memory Region: Not Supported 00:18:09.418 Optional Asynchronous Events 
Supported 00:18:09.418 Namespace Attribute Notices: Supported 00:18:09.418 Firmware Activation Notices: Not Supported 00:18:09.418 ANA Change Notices: Not Supported 00:18:09.418 PLE Aggregate Log Change Notices: Not Supported 00:18:09.418 LBA Status Info Alert Notices: Not Supported 00:18:09.418 EGE Aggregate Log Change Notices: Not Supported 00:18:09.418 Normal NVM Subsystem Shutdown event: Not Supported 00:18:09.418 Zone Descriptor Change Notices: Not Supported 00:18:09.418 Discovery Log Change Notices: Not Supported 00:18:09.418 Controller Attributes 00:18:09.418 128-bit Host Identifier: Supported 00:18:09.418 Non-Operational Permissive Mode: Not Supported 00:18:09.418 NVM Sets: Not Supported 00:18:09.418 Read Recovery Levels: Not Supported 00:18:09.418 Endurance Groups: Not Supported 00:18:09.418 Predictable Latency Mode: Not Supported 00:18:09.418 Traffic Based Keep ALive: Not Supported 00:18:09.418 Namespace Granularity: Not Supported 00:18:09.418 SQ Associations: Not Supported 00:18:09.418 UUID List: Not Supported 00:18:09.418 Multi-Domain Subsystem: Not Supported 00:18:09.418 Fixed Capacity Management: Not Supported 00:18:09.418 Variable Capacity Management: Not Supported 00:18:09.418 Delete Endurance Group: Not Supported 00:18:09.418 Delete NVM Set: Not Supported 00:18:09.418 Extended LBA Formats Supported: Not Supported 00:18:09.418 Flexible Data Placement Supported: Not Supported 00:18:09.418 00:18:09.418 Controller Memory Buffer Support 00:18:09.418 ================================ 00:18:09.418 Supported: No 00:18:09.418 00:18:09.418 Persistent Memory Region Support 00:18:09.418 ================================ 00:18:09.418 Supported: No 00:18:09.418 00:18:09.418 Admin Command Set Attributes 00:18:09.418 ============================ 00:18:09.418 Security Send/Receive: Not Supported 00:18:09.418 Format NVM: Not Supported 00:18:09.418 Firmware Activate/Download: Not Supported 00:18:09.418 Namespace Management: Not Supported 00:18:09.418 Device Self-Test: 
Not Supported 00:18:09.418 Directives: Not Supported 00:18:09.418 NVMe-MI: Not Supported 00:18:09.418 Virtualization Management: Not Supported 00:18:09.418 Doorbell Buffer Config: Not Supported 00:18:09.418 Get LBA Status Capability: Not Supported 00:18:09.418 Command & Feature Lockdown Capability: Not Supported 00:18:09.418 Abort Command Limit: 4 00:18:09.418 Async Event Request Limit: 4 00:18:09.418 Number of Firmware Slots: N/A 00:18:09.418 Firmware Slot 1 Read-Only: N/A 00:18:09.418 Firmware Activation Without Reset: N/A 00:18:09.418 Multiple Update Detection Support: N/A 00:18:09.418 Firmware Update Granularity: No Information Provided 00:18:09.418 Per-Namespace SMART Log: No 00:18:09.418 Asymmetric Namespace Access Log Page: Not Supported 00:18:09.418 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:09.418 Command Effects Log Page: Supported 00:18:09.418 Get Log Page Extended Data: Supported 00:18:09.418 Telemetry Log Pages: Not Supported 00:18:09.418 Persistent Event Log Pages: Not Supported 00:18:09.418 Supported Log Pages Log Page: May Support 00:18:09.418 Commands Supported & Effects Log Page: Not Supported 00:18:09.418 Feature Identifiers & Effects Log Page:May Support 00:18:09.419 NVMe-MI Commands & Effects Log Page: May Support 00:18:09.419 Data Area 4 for Telemetry Log: Not Supported 00:18:09.419 Error Log Page Entries Supported: 128 00:18:09.419 Keep Alive: Supported 00:18:09.419 Keep Alive Granularity: 10000 ms 00:18:09.419 00:18:09.419 NVM Command Set Attributes 00:18:09.419 ========================== 00:18:09.419 Submission Queue Entry Size 00:18:09.419 Max: 64 00:18:09.419 Min: 64 00:18:09.419 Completion Queue Entry Size 00:18:09.419 Max: 16 00:18:09.419 Min: 16 00:18:09.419 Number of Namespaces: 32 00:18:09.419 Compare Command: Supported 00:18:09.419 Write Uncorrectable Command: Not Supported 00:18:09.419 Dataset Management Command: Supported 00:18:09.419 Write Zeroes Command: Supported 00:18:09.419 Set Features Save Field: Not Supported 
00:18:09.419 Reservations: Not Supported 00:18:09.419 Timestamp: Not Supported 00:18:09.419 Copy: Supported 00:18:09.419 Volatile Write Cache: Present 00:18:09.419 Atomic Write Unit (Normal): 1 00:18:09.419 Atomic Write Unit (PFail): 1 00:18:09.419 Atomic Compare & Write Unit: 1 00:18:09.419 Fused Compare & Write: Supported 00:18:09.419 Scatter-Gather List 00:18:09.419 SGL Command Set: Supported (Dword aligned) 00:18:09.419 SGL Keyed: Not Supported 00:18:09.419 SGL Bit Bucket Descriptor: Not Supported 00:18:09.419 SGL Metadata Pointer: Not Supported 00:18:09.419 Oversized SGL: Not Supported 00:18:09.419 SGL Metadata Address: Not Supported 00:18:09.419 SGL Offset: Not Supported 00:18:09.419 Transport SGL Data Block: Not Supported 00:18:09.419 Replay Protected Memory Block: Not Supported 00:18:09.419 00:18:09.419 Firmware Slot Information 00:18:09.419 ========================= 00:18:09.419 Active slot: 1 00:18:09.419 Slot 1 Firmware Revision: 25.01 00:18:09.419 00:18:09.419 00:18:09.419 Commands Supported and Effects 00:18:09.419 ============================== 00:18:09.419 Admin Commands 00:18:09.419 -------------- 00:18:09.419 Get Log Page (02h): Supported 00:18:09.419 Identify (06h): Supported 00:18:09.419 Abort (08h): Supported 00:18:09.419 Set Features (09h): Supported 00:18:09.419 Get Features (0Ah): Supported 00:18:09.419 Asynchronous Event Request (0Ch): Supported 00:18:09.419 Keep Alive (18h): Supported 00:18:09.419 I/O Commands 00:18:09.419 ------------ 00:18:09.419 Flush (00h): Supported LBA-Change 00:18:09.419 Write (01h): Supported LBA-Change 00:18:09.419 Read (02h): Supported 00:18:09.419 Compare (05h): Supported 00:18:09.419 Write Zeroes (08h): Supported LBA-Change 00:18:09.419 Dataset Management (09h): Supported LBA-Change 00:18:09.419 Copy (19h): Supported LBA-Change 00:18:09.419 00:18:09.419 Error Log 00:18:09.419 ========= 00:18:09.419 00:18:09.419 Arbitration 00:18:09.419 =========== 00:18:09.419 Arbitration Burst: 1 00:18:09.419 00:18:09.419 Power 
Management 00:18:09.419 ================ 00:18:09.419 Number of Power States: 1 00:18:09.419 Current Power State: Power State #0 00:18:09.419 Power State #0: 00:18:09.419 Max Power: 0.00 W 00:18:09.419 Non-Operational State: Operational 00:18:09.419 Entry Latency: Not Reported 00:18:09.419 Exit Latency: Not Reported 00:18:09.419 Relative Read Throughput: 0 00:18:09.419 Relative Read Latency: 0 00:18:09.419 Relative Write Throughput: 0 00:18:09.419 Relative Write Latency: 0 00:18:09.419 Idle Power: Not Reported 00:18:09.419 Active Power: Not Reported 00:18:09.419 Non-Operational Permissive Mode: Not Supported 00:18:09.419 00:18:09.419 Health Information 00:18:09.419 ================== 00:18:09.419 Critical Warnings: 00:18:09.419 Available Spare Space: OK 00:18:09.419 Temperature: OK 00:18:09.419 Device Reliability: OK 00:18:09.419 Read Only: No 00:18:09.419 Volatile Memory Backup: OK 00:18:09.419 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:09.419 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:09.419 Available Spare: 0% 00:18:09.419 Available Sp[2024-10-11 22:41:12.634710] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:09.419 [2024-10-11 22:41:12.634727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:09.419 [2024-10-11 22:41:12.634773] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:18:09.419 [2024-10-11 22:41:12.634790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:09.419 [2024-10-11 22:41:12.634801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:09.419 [2024-10-11 22:41:12.634810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:09.419 [2024-10-11 22:41:12.634820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:09.419 [2024-10-11 22:41:12.637564] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:09.419 [2024-10-11 22:41:12.637586] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:09.419 [2024-10-11 22:41:12.638285] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:09.419 [2024-10-11 22:41:12.638355] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:18:09.419 [2024-10-11 22:41:12.638368] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:18:09.419 [2024-10-11 22:41:12.639298] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:09.419 [2024-10-11 22:41:12.639320] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:18:09.419 [2024-10-11 22:41:12.639377] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:09.419 [2024-10-11 22:41:12.642560] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:09.677 are Threshold: 0% 00:18:09.677 Life Percentage Used: 0% 00:18:09.677 Data Units Read: 0 00:18:09.677 Data Units Written: 0 00:18:09.677 Host Read Commands: 0 00:18:09.677 Host Write Commands: 0 00:18:09.677 Controller Busy Time: 0 minutes 
00:18:09.677 Power Cycles: 0 00:18:09.677 Power On Hours: 0 hours 00:18:09.677 Unsafe Shutdowns: 0 00:18:09.677 Unrecoverable Media Errors: 0 00:18:09.677 Lifetime Error Log Entries: 0 00:18:09.677 Warning Temperature Time: 0 minutes 00:18:09.677 Critical Temperature Time: 0 minutes 00:18:09.677 00:18:09.677 Number of Queues 00:18:09.677 ================ 00:18:09.677 Number of I/O Submission Queues: 127 00:18:09.677 Number of I/O Completion Queues: 127 00:18:09.677 00:18:09.677 Active Namespaces 00:18:09.677 ================= 00:18:09.677 Namespace ID:1 00:18:09.677 Error Recovery Timeout: Unlimited 00:18:09.677 Command Set Identifier: NVM (00h) 00:18:09.677 Deallocate: Supported 00:18:09.677 Deallocated/Unwritten Error: Not Supported 00:18:09.677 Deallocated Read Value: Unknown 00:18:09.677 Deallocate in Write Zeroes: Not Supported 00:18:09.677 Deallocated Guard Field: 0xFFFF 00:18:09.677 Flush: Supported 00:18:09.677 Reservation: Supported 00:18:09.677 Namespace Sharing Capabilities: Multiple Controllers 00:18:09.677 Size (in LBAs): 131072 (0GiB) 00:18:09.677 Capacity (in LBAs): 131072 (0GiB) 00:18:09.677 Utilization (in LBAs): 131072 (0GiB) 00:18:09.677 NGUID: 3B4C522334BE42F087926630CBB5C678 00:18:09.677 UUID: 3b4c5223-34be-42f0-8792-6630cbb5c678 00:18:09.677 Thin Provisioning: Not Supported 00:18:09.677 Per-NS Atomic Units: Yes 00:18:09.677 Atomic Boundary Size (Normal): 0 00:18:09.677 Atomic Boundary Size (PFail): 0 00:18:09.677 Atomic Boundary Offset: 0 00:18:09.677 Maximum Single Source Range Length: 65535 00:18:09.677 Maximum Copy Length: 65535 00:18:09.677 Maximum Source Range Count: 1 00:18:09.677 NGUID/EUI64 Never Reused: No 00:18:09.678 Namespace Write Protected: No 00:18:09.678 Number of LBA Formats: 1 00:18:09.678 Current LBA Format: LBA Format #00 00:18:09.678 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:09.678 00:18:09.678 22:41:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:09.678 [2024-10-11 22:41:12.873444] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:14.944 Initializing NVMe Controllers 00:18:14.944 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:14.944 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:14.944 Initialization complete. Launching workers. 00:18:14.944 ======================================================== 00:18:14.944 Latency(us) 00:18:14.944 Device Information : IOPS MiB/s Average min max 00:18:14.944 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 32635.42 127.48 3921.64 1199.57 9268.64 00:18:14.944 ======================================================== 00:18:14.944 Total : 32635.42 127.48 3921.64 1199.57 9268.64 00:18:14.944 00:18:14.944 [2024-10-11 22:41:17.894061] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:14.944 22:41:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:14.944 [2024-10-11 22:41:18.129219] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:20.210 Initializing NVMe Controllers 00:18:20.210 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:20.210 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:20.210 
Initialization complete. Launching workers. 00:18:20.210 ======================================================== 00:18:20.210 Latency(us) 00:18:20.210 Device Information : IOPS MiB/s Average min max 00:18:20.210 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15948.80 62.30 8034.13 4928.46 15985.39 00:18:20.210 ======================================================== 00:18:20.210 Total : 15948.80 62.30 8034.13 4928.46 15985.39 00:18:20.210 00:18:20.210 [2024-10-11 22:41:23.164286] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:20.210 22:41:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:20.210 [2024-10-11 22:41:23.374301] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:25.477 [2024-10-11 22:41:28.439887] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:25.477 Initializing NVMe Controllers 00:18:25.477 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:25.477 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:25.477 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:25.477 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:25.477 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:25.477 Initialization complete. Launching workers. 
00:18:25.477 Starting thread on core 2 00:18:25.477 Starting thread on core 3 00:18:25.477 Starting thread on core 1 00:18:25.477 22:41:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:25.477 [2024-10-11 22:41:28.740031] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:28.764 [2024-10-11 22:41:31.806902] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:28.764 Initializing NVMe Controllers 00:18:28.764 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:28.764 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:28.764 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:28.764 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:28.764 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:28.764 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:28.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:28.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:28.764 Initialization complete. Launching workers. 
00:18:28.764 Starting thread on core 1 with urgent priority queue 00:18:28.764 Starting thread on core 2 with urgent priority queue 00:18:28.764 Starting thread on core 3 with urgent priority queue 00:18:28.764 Starting thread on core 0 with urgent priority queue 00:18:28.764 SPDK bdev Controller (SPDK1 ) core 0: 5215.67 IO/s 19.17 secs/100000 ios 00:18:28.764 SPDK bdev Controller (SPDK1 ) core 1: 5159.00 IO/s 19.38 secs/100000 ios 00:18:28.764 SPDK bdev Controller (SPDK1 ) core 2: 6157.33 IO/s 16.24 secs/100000 ios 00:18:28.764 SPDK bdev Controller (SPDK1 ) core 3: 5971.00 IO/s 16.75 secs/100000 ios 00:18:28.764 ======================================================== 00:18:28.764 00:18:28.764 22:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:29.023 [2024-10-11 22:41:32.094283] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:29.023 Initializing NVMe Controllers 00:18:29.023 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:29.023 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:29.023 Namespace ID: 1 size: 0GB 00:18:29.023 Initialization complete. 00:18:29.023 INFO: using host memory buffer for IO 00:18:29.023 Hello world! 
00:18:29.023 [2024-10-11 22:41:32.126924] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:29.023 22:41:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:29.281 [2024-10-11 22:41:32.409153] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:30.216 Initializing NVMe Controllers 00:18:30.216 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:30.216 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:30.216 Initialization complete. Launching workers. 00:18:30.216 submit (in ns) avg, min, max = 8950.3, 3505.6, 4016673.3 00:18:30.216 complete (in ns) avg, min, max = 24120.2, 2058.9, 4015473.3 00:18:30.216 00:18:30.216 Submit histogram 00:18:30.216 ================ 00:18:30.216 Range in us Cumulative Count 00:18:30.216 3.484 - 3.508: 0.0157% ( 2) 00:18:30.216 3.508 - 3.532: 0.1801% ( 21) 00:18:30.216 3.532 - 3.556: 0.8063% ( 80) 00:18:30.216 3.556 - 3.579: 2.5599% ( 224) 00:18:30.216 3.579 - 3.603: 6.2784% ( 475) 00:18:30.216 3.603 - 3.627: 12.9012% ( 846) 00:18:30.216 3.627 - 3.650: 21.1602% ( 1055) 00:18:30.216 3.650 - 3.674: 29.2860% ( 1038) 00:18:30.216 3.674 - 3.698: 37.0831% ( 996) 00:18:30.216 3.698 - 3.721: 44.4027% ( 935) 00:18:30.216 3.721 - 3.745: 50.7124% ( 806) 00:18:30.216 3.745 - 3.769: 56.2940% ( 713) 00:18:30.216 3.769 - 3.793: 61.1242% ( 617) 00:18:30.216 3.793 - 3.816: 64.9836% ( 493) 00:18:30.216 3.816 - 3.840: 68.7490% ( 481) 00:18:30.216 3.840 - 3.864: 72.5536% ( 486) 00:18:30.216 3.864 - 3.887: 76.3347% ( 483) 00:18:30.216 3.887 - 3.911: 79.9358% ( 460) 00:18:30.216 3.911 - 3.935: 83.0828% ( 402) 00:18:30.216 3.935 - 3.959: 85.5957% ( 321) 00:18:30.216 3.959 - 3.982: 87.8581% ( 289) 
00:18:30.216 3.982 - 4.006: 89.7370% ( 240) 00:18:30.216 4.006 - 4.030: 91.0835% ( 172) 00:18:30.216 4.030 - 4.053: 92.2186% ( 145) 00:18:30.216 4.053 - 4.077: 93.1345% ( 117) 00:18:30.216 4.077 - 4.101: 93.8234% ( 88) 00:18:30.216 4.101 - 4.124: 94.5906% ( 98) 00:18:30.216 4.124 - 4.148: 95.2325% ( 82) 00:18:30.216 4.148 - 4.172: 95.6239% ( 50) 00:18:30.216 4.172 - 4.196: 96.0310% ( 52) 00:18:30.216 4.196 - 4.219: 96.2815% ( 32) 00:18:30.216 4.219 - 4.243: 96.4772% ( 25) 00:18:30.216 4.243 - 4.267: 96.6573% ( 23) 00:18:30.216 4.267 - 4.290: 96.7825% ( 16) 00:18:30.216 4.290 - 4.314: 96.8530% ( 9) 00:18:30.216 4.314 - 4.338: 96.9548% ( 13) 00:18:30.216 4.338 - 4.361: 97.0252% ( 9) 00:18:30.216 4.361 - 4.385: 97.0800% ( 7) 00:18:30.216 4.385 - 4.409: 97.1818% ( 13) 00:18:30.216 4.409 - 4.433: 97.2601% ( 10) 00:18:30.216 4.433 - 4.456: 97.3383% ( 10) 00:18:30.216 4.456 - 4.480: 97.4088% ( 9) 00:18:30.216 4.480 - 4.504: 97.4636% ( 7) 00:18:30.216 4.504 - 4.527: 97.4949% ( 4) 00:18:30.216 4.527 - 4.551: 97.5106% ( 2) 00:18:30.216 4.551 - 4.575: 97.5184% ( 1) 00:18:30.216 4.575 - 4.599: 97.5262% ( 1) 00:18:30.216 4.599 - 4.622: 97.5341% ( 1) 00:18:30.216 4.670 - 4.693: 97.5497% ( 2) 00:18:30.216 4.693 - 4.717: 97.5654% ( 2) 00:18:30.216 4.717 - 4.741: 97.5732% ( 1) 00:18:30.216 4.741 - 4.764: 97.5889% ( 2) 00:18:30.216 4.764 - 4.788: 97.6123% ( 3) 00:18:30.216 4.788 - 4.812: 97.6437% ( 4) 00:18:30.216 4.812 - 4.836: 97.6828% ( 5) 00:18:30.216 4.836 - 4.859: 97.7454% ( 8) 00:18:30.216 4.859 - 4.883: 97.8002% ( 7) 00:18:30.216 4.883 - 4.907: 97.8628% ( 8) 00:18:30.216 4.907 - 4.930: 97.9333% ( 9) 00:18:30.216 4.930 - 4.954: 97.9646% ( 4) 00:18:30.216 4.954 - 4.978: 98.0194% ( 7) 00:18:30.216 4.978 - 5.001: 98.0664% ( 6) 00:18:30.216 5.001 - 5.025: 98.1055% ( 5) 00:18:30.216 5.025 - 5.049: 98.1290% ( 3) 00:18:30.216 5.049 - 5.073: 98.1525% ( 3) 00:18:30.216 5.073 - 5.096: 98.1916% ( 5) 00:18:30.216 5.096 - 5.120: 98.2073% ( 2) 00:18:30.216 5.120 - 5.144: 98.2621% ( 7) 
00:18:30.216 5.144 - 5.167: 98.2778% ( 2) 00:18:30.216 5.167 - 5.191: 98.3012% ( 3) 00:18:30.216 5.191 - 5.215: 98.3247% ( 3) 00:18:30.216 5.239 - 5.262: 98.3482% ( 3) 00:18:30.216 5.262 - 5.286: 98.3717% ( 3) 00:18:30.216 5.286 - 5.310: 98.3952% ( 3) 00:18:30.216 5.357 - 5.381: 98.4030% ( 1) 00:18:30.216 5.381 - 5.404: 98.4187% ( 2) 00:18:30.216 5.428 - 5.452: 98.4265% ( 1) 00:18:30.216 5.570 - 5.594: 98.4421% ( 2) 00:18:30.216 5.665 - 5.689: 98.4500% ( 1) 00:18:30.216 5.689 - 5.713: 98.4578% ( 1) 00:18:30.216 5.736 - 5.760: 98.4656% ( 1) 00:18:30.216 6.044 - 6.068: 98.4735% ( 1) 00:18:30.216 6.400 - 6.447: 98.4813% ( 1) 00:18:30.216 6.495 - 6.542: 98.4891% ( 1) 00:18:30.216 7.064 - 7.111: 98.4969% ( 1) 00:18:30.216 7.111 - 7.159: 98.5048% ( 1) 00:18:30.216 7.159 - 7.206: 98.5126% ( 1) 00:18:30.216 7.206 - 7.253: 98.5204% ( 1) 00:18:30.216 7.301 - 7.348: 98.5283% ( 1) 00:18:30.216 7.396 - 7.443: 98.5361% ( 1) 00:18:30.216 7.585 - 7.633: 98.5517% ( 2) 00:18:30.216 7.633 - 7.680: 98.5596% ( 1) 00:18:30.216 7.680 - 7.727: 98.5752% ( 2) 00:18:30.216 7.822 - 7.870: 98.5831% ( 1) 00:18:30.216 7.964 - 8.012: 98.5909% ( 1) 00:18:30.216 8.059 - 8.107: 98.6065% ( 2) 00:18:30.216 8.107 - 8.154: 98.6144% ( 1) 00:18:30.216 8.249 - 8.296: 98.6222% ( 1) 00:18:30.216 8.296 - 8.344: 98.6300% ( 1) 00:18:30.216 8.344 - 8.391: 98.6379% ( 1) 00:18:30.216 8.391 - 8.439: 98.6457% ( 1) 00:18:30.216 8.439 - 8.486: 98.6535% ( 1) 00:18:30.216 8.486 - 8.533: 98.6613% ( 1) 00:18:30.216 8.533 - 8.581: 98.6692% ( 1) 00:18:30.216 8.581 - 8.628: 98.6770% ( 1) 00:18:30.216 8.628 - 8.676: 98.6848% ( 1) 00:18:30.216 8.676 - 8.723: 98.6927% ( 1) 00:18:30.216 8.723 - 8.770: 98.7005% ( 1) 00:18:30.216 8.770 - 8.818: 98.7083% ( 1) 00:18:30.216 8.818 - 8.865: 98.7161% ( 1) 00:18:30.216 8.865 - 8.913: 98.7240% ( 1) 00:18:30.216 8.960 - 9.007: 98.7396% ( 2) 00:18:30.216 9.007 - 9.055: 98.7475% ( 1) 00:18:30.216 9.055 - 9.102: 98.7553% ( 1) 00:18:30.216 9.102 - 9.150: 98.7631% ( 1) 00:18:30.216 9.150 - 
9.197: 98.7709% ( 1) 00:18:30.216 9.197 - 9.244: 98.7788% ( 1) 00:18:30.216 9.292 - 9.339: 98.7944% ( 2) 00:18:30.216 9.339 - 9.387: 98.8023% ( 1) 00:18:30.216 9.387 - 9.434: 98.8101% ( 1) 00:18:30.216 9.529 - 9.576: 98.8257% ( 2) 00:18:30.216 9.576 - 9.624: 98.8336% ( 1) 00:18:30.216 9.719 - 9.766: 98.8414% ( 1) 00:18:30.216 9.813 - 9.861: 98.8492% ( 1) 00:18:30.216 10.050 - 10.098: 98.8571% ( 1) 00:18:30.216 10.524 - 10.572: 98.8649% ( 1) 00:18:30.216 10.619 - 10.667: 98.8727% ( 1) 00:18:30.216 10.667 - 10.714: 98.8805% ( 1) 00:18:30.216 10.904 - 10.951: 98.8884% ( 1) 00:18:30.216 10.999 - 11.046: 98.8962% ( 1) 00:18:30.216 11.520 - 11.567: 98.9040% ( 1) 00:18:30.216 11.567 - 11.615: 98.9119% ( 1) 00:18:30.216 11.757 - 11.804: 98.9197% ( 1) 00:18:30.216 11.852 - 11.899: 98.9275% ( 1) 00:18:30.216 11.994 - 12.041: 98.9353% ( 1) 00:18:30.216 12.041 - 12.089: 98.9432% ( 1) 00:18:30.216 12.136 - 12.231: 98.9667% ( 3) 00:18:30.216 12.231 - 12.326: 98.9745% ( 1) 00:18:30.216 12.326 - 12.421: 98.9901% ( 2) 00:18:30.216 12.610 - 12.705: 98.9980% ( 1) 00:18:30.216 12.705 - 12.800: 99.0058% ( 1) 00:18:30.216 12.895 - 12.990: 99.0136% ( 1) 00:18:30.216 13.084 - 13.179: 99.0214% ( 1) 00:18:30.216 13.559 - 13.653: 99.0293% ( 1) 00:18:30.216 13.653 - 13.748: 99.0371% ( 1) 00:18:30.216 13.843 - 13.938: 99.0449% ( 1) 00:18:30.216 14.033 - 14.127: 99.0606% ( 2) 00:18:30.216 14.696 - 14.791: 99.0684% ( 1) 00:18:30.216 14.886 - 14.981: 99.0762% ( 1) 00:18:30.216 15.265 - 15.360: 99.0841% ( 1) 00:18:30.216 15.360 - 15.455: 99.0919% ( 1) 00:18:30.217 17.161 - 17.256: 99.1076% ( 2) 00:18:30.217 17.256 - 17.351: 99.1232% ( 2) 00:18:30.217 17.351 - 17.446: 99.1545% ( 4) 00:18:30.217 17.446 - 17.541: 99.2093% ( 7) 00:18:30.217 17.541 - 17.636: 99.2485% ( 5) 00:18:30.217 17.636 - 17.730: 99.3189% ( 9) 00:18:30.217 17.730 - 17.825: 99.3816% ( 8) 00:18:30.217 17.825 - 17.920: 99.4207% ( 5) 00:18:30.217 17.920 - 18.015: 99.4598% ( 5) 00:18:30.217 18.015 - 18.110: 99.5068% ( 6) 00:18:30.217 
18.110 - 18.204: 99.6008% ( 12) 00:18:30.217 18.204 - 18.299: 99.6399% ( 5) 00:18:30.217 18.299 - 18.394: 99.6712% ( 4) 00:18:30.217 18.394 - 18.489: 99.6947% ( 3) 00:18:30.217 18.489 - 18.584: 99.7025% ( 1) 00:18:30.217 18.584 - 18.679: 99.7182% ( 2) 00:18:30.217 18.679 - 18.773: 99.7260% ( 1) 00:18:30.217 18.773 - 18.868: 99.7573% ( 4) 00:18:30.217 18.868 - 18.963: 99.7730% ( 2) 00:18:30.217 18.963 - 19.058: 99.7886% ( 2) 00:18:30.217 19.058 - 19.153: 99.8043% ( 2) 00:18:30.217 19.153 - 19.247: 99.8199% ( 2) 00:18:30.217 19.247 - 19.342: 99.8278% ( 1) 00:18:30.217 19.342 - 19.437: 99.8356% ( 1) 00:18:30.217 19.437 - 19.532: 99.8434% ( 1) 00:18:30.217 20.101 - 20.196: 99.8513% ( 1) 00:18:30.217 22.661 - 22.756: 99.8591% ( 1) 00:18:30.217 24.273 - 24.462: 99.8669% ( 1) 00:18:30.217 27.117 - 27.307: 99.8747% ( 1) 00:18:30.217 3980.705 - 4004.978: 99.9843% ( 14) 00:18:30.217 4004.978 - 4029.250: 100.0000% ( 2) 00:18:30.217 00:18:30.217 Complete histogram 00:18:30.217 ================== 00:18:30.217 Range in us Cumulative Count 00:18:30.217 2.050 - 2.062: 0.1331% ( 17) 00:18:30.217 2.062 - 2.074: 24.9256% ( 3167) 00:18:30.217 2.074 - 2.086: 51.6205% ( 3410) 00:18:30.217 2.086 - 2.098: 53.0296% ( 180) 00:18:30.217 2.098 - 2.110: 56.1688% ( 401) 00:18:30.217 2.110 - 2.121: 58.1102% ( 248) 00:18:30.217 2.121 - 2.133: 60.9911% ( 368) 00:18:30.217 2.133 - 2.145: 75.5989% ( 1866) 00:18:30.217 2.145 - 2.157: 81.7363% ( 784) 00:18:30.217 2.157 - 2.169: 82.7932% ( 135) 00:18:30.217 2.169 - 2.181: 84.6955% ( 243) 00:18:30.217 2.181 - 2.193: 85.8149% ( 143) 00:18:30.217 2.193 - 2.204: 86.7935% ( 125) 00:18:30.217 2.204 - 2.216: 89.7761% ( 381) 00:18:30.217 2.216 - 2.228: 92.5082% ( 349) 00:18:30.217 2.228 - 2.240: 93.6355% ( 144) 00:18:30.217 2.240 - 2.252: 94.1757% ( 69) 00:18:30.217 2.252 - 2.264: 94.4340% ( 33) 00:18:30.217 2.264 - 2.276: 94.6141% ( 23) 00:18:30.217 2.276 - 2.287: 94.8724% ( 33) 00:18:30.217 2.287 - 2.299: 95.3186% ( 57) 00:18:30.217 2.299 - 2.311: 95.6474% ( 
42) 00:18:30.217 2.311 - 2.323: 95.7335% ( 11) 00:18:30.217 2.323 - 2.335: 95.7570% ( 3) 00:18:30.217 2.335 - 2.347: 95.7648% ( 1) 00:18:30.217 2.347 - 2.359: 95.8040% ( 5) 00:18:30.217 2.359 - 2.370: 95.9057% ( 13) 00:18:30.217 2.370 - 2.382: 96.0545% ( 19) 00:18:30.217 2.382 - 2.394: 96.2502% ( 25) 00:18:30.217 2.394 - 2.406: 96.4537% ( 26) 00:18:30.217 2.406 - 2.418: 96.6494% ( 25) 00:18:30.217 2.418 - 2.430: 96.9626% ( 40) 00:18:30.217 2.430 - 2.441: 97.3305% ( 47) 00:18:30.217 2.441 - 2.453: 97.5184% ( 24) 00:18:30.217 2.453 - 2.465: 97.6828% ( 21) 00:18:30.217 2.465 - 2.477: 97.9098% ( 29) 00:18:30.217 2.477 - 2.489: 98.0194% ( 14) 00:18:30.217 2.489 - 2.501: 98.0977% ( 10) 00:18:30.217 2.501 - 2.513: 98.1916% ( 12) 00:18:30.217 2.513 - 2.524: 98.2778% ( 11) 00:18:30.217 2.524 - 2.536: 98.2856% ( 1) 00:18:30.217 2.536 - 2.548: 98.3247% ( 5) 00:18:30.217 2.548 - 2.560: 98.3873% ( 8) 00:18:30.217 2.560 - 2.572: 98.4108% ( 3) 00:18:30.217 2.596 - 2.607: 98.4187% ( 1) 00:18:30.217 2.607 - 2.619: 98.4343% ( 2) 00:18:30.217 2.619 - 2.631: 98.4421% ( 1) 00:18:30.217 2.631 - 2.643: 98.4500% ( 1) 00:18:30.217 2.643 - 2.655: 98.4578% ( 1) 00:18:30.217 2.655 - 2.667: 98.4656% ( 1) 00:18:30.217 2.667 - 2.679: 98.4813% ( 2) 00:18:30.217 2.679 - 2.690: 98.4891% ( 1) 00:18:30.217 2.738 - 2.750: 98.4969% ( 1) 00:18:30.217 2.750 - 2.761: 98.5048% ( 1) 00:18:30.217 3.081 - 3.105: 98.5126% ( 1) 00:18:30.217 3.271 - 3.295: 98.5204% ( 1) 00:18:30.217 3.295 - 3.319: 98.5283% ( 1) 00:18:30.217 3.319 - 3.342: 98.5361% ( 1) 00:18:30.217 3.342 - 3.366: 98.5439% ( 1) 00:18:30.217 3.390 - 3.413: 98.5752% ( 4) 00:18:30.217 3.413 - 3.437: 98.5909% ( 2) 00:18:30.217 3.437 - 3.461: 98.6065% ( 2) 00:18:30.217 3.484 - 3.508: 98.6222% ( 2) 00:18:30.217 3.508 - 3.532: 98.6300% ( 1) 00:18:30.217 3.532 - 3.556: 98.6535% ( 3) 00:18:30.217 3.556 - 3.579: 98.6770% ( 3) 00:18:30.217 3.579 - 3.603: 98.6927% ( 2) 00:18:30.217 3.603 - 3.627: 98.7083% ( 2) 00:18:30.217 3.650 - 3.674: 98.7161% ( 1) 
00:18:30.217 3.674 - 3.698: 9[2024-10-11 22:41:33.432312] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:30.217 8.7240% ( 1) 00:18:30.217 3.745 - 3.769: 98.7318% ( 1) 00:18:30.217 3.864 - 3.887: 98.7475% ( 2) 00:18:30.217 3.959 - 3.982: 98.7631% ( 2) 00:18:30.217 4.219 - 4.243: 98.7709% ( 1) 00:18:30.217 5.784 - 5.807: 98.7788% ( 1) 00:18:30.217 5.926 - 5.950: 98.7866% ( 1) 00:18:30.217 6.163 - 6.210: 98.7944% ( 1) 00:18:30.217 6.305 - 6.353: 98.8101% ( 2) 00:18:30.217 6.590 - 6.637: 98.8179% ( 1) 00:18:30.217 6.637 - 6.684: 98.8257% ( 1) 00:18:30.217 6.779 - 6.827: 98.8336% ( 1) 00:18:30.217 6.827 - 6.874: 98.8414% ( 1) 00:18:30.217 6.874 - 6.921: 98.8492% ( 1) 00:18:30.217 7.253 - 7.301: 98.8649% ( 2) 00:18:30.217 7.775 - 7.822: 98.8805% ( 2) 00:18:30.217 7.917 - 7.964: 98.8884% ( 1) 00:18:30.217 8.391 - 8.439: 98.8962% ( 1) 00:18:30.217 9.102 - 9.150: 98.9040% ( 1) 00:18:30.217 13.369 - 13.464: 98.9119% ( 1) 00:18:30.217 15.644 - 15.739: 98.9353% ( 3) 00:18:30.217 15.739 - 15.834: 98.9510% ( 2) 00:18:30.217 15.834 - 15.929: 98.9667% ( 2) 00:18:30.217 15.929 - 16.024: 99.0136% ( 6) 00:18:30.217 16.024 - 16.119: 99.0371% ( 3) 00:18:30.217 16.119 - 16.213: 99.0684% ( 4) 00:18:30.217 16.213 - 16.308: 99.1154% ( 6) 00:18:30.217 16.308 - 16.403: 99.1310% ( 2) 00:18:30.217 16.403 - 16.498: 99.1937% ( 8) 00:18:30.217 16.498 - 16.593: 99.2485% ( 7) 00:18:30.217 16.593 - 16.687: 99.2954% ( 6) 00:18:30.217 16.687 - 16.782: 99.3111% ( 2) 00:18:30.217 16.782 - 16.877: 99.3268% ( 2) 00:18:30.217 16.877 - 16.972: 99.3581% ( 4) 00:18:30.217 17.067 - 17.161: 99.3894% ( 4) 00:18:30.217 17.256 - 17.351: 99.3972% ( 1) 00:18:30.217 17.446 - 17.541: 99.4129% ( 2) 00:18:30.217 17.730 - 17.825: 99.4207% ( 1) 00:18:30.217 18.394 - 18.489: 99.4285% ( 1) 00:18:30.217 18.489 - 18.584: 99.4364% ( 1) 00:18:30.217 25.410 - 25.600: 99.4442% ( 1) 00:18:30.217 26.548 - 26.738: 99.4520% ( 1) 00:18:30.217 3883.615 - 3907.887: 99.4598% ( 
1) 00:18:30.217 3980.705 - 4004.978: 99.9374% ( 61) 00:18:30.217 4004.978 - 4029.250: 100.0000% ( 8) 00:18:30.217 00:18:30.217 22:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:30.217 22:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:30.217 22:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:30.217 22:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:30.217 22:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:30.475 [ 00:18:30.475 { 00:18:30.475 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:30.475 "subtype": "Discovery", 00:18:30.475 "listen_addresses": [], 00:18:30.475 "allow_any_host": true, 00:18:30.475 "hosts": [] 00:18:30.475 }, 00:18:30.475 { 00:18:30.475 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:30.475 "subtype": "NVMe", 00:18:30.475 "listen_addresses": [ 00:18:30.475 { 00:18:30.475 "trtype": "VFIOUSER", 00:18:30.475 "adrfam": "IPv4", 00:18:30.475 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:30.475 "trsvcid": "0" 00:18:30.475 } 00:18:30.475 ], 00:18:30.475 "allow_any_host": true, 00:18:30.475 "hosts": [], 00:18:30.475 "serial_number": "SPDK1", 00:18:30.475 "model_number": "SPDK bdev Controller", 00:18:30.475 "max_namespaces": 32, 00:18:30.475 "min_cntlid": 1, 00:18:30.475 "max_cntlid": 65519, 00:18:30.475 "namespaces": [ 00:18:30.475 { 00:18:30.475 "nsid": 1, 00:18:30.475 "bdev_name": "Malloc1", 00:18:30.475 "name": "Malloc1", 00:18:30.475 "nguid": "3B4C522334BE42F087926630CBB5C678", 00:18:30.475 "uuid": "3b4c5223-34be-42f0-8792-6630cbb5c678" 00:18:30.475 
} 00:18:30.475 ] 00:18:30.475 }, 00:18:30.476 { 00:18:30.476 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:30.476 "subtype": "NVMe", 00:18:30.476 "listen_addresses": [ 00:18:30.476 { 00:18:30.476 "trtype": "VFIOUSER", 00:18:30.476 "adrfam": "IPv4", 00:18:30.476 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:30.476 "trsvcid": "0" 00:18:30.476 } 00:18:30.476 ], 00:18:30.476 "allow_any_host": true, 00:18:30.476 "hosts": [], 00:18:30.476 "serial_number": "SPDK2", 00:18:30.476 "model_number": "SPDK bdev Controller", 00:18:30.476 "max_namespaces": 32, 00:18:30.476 "min_cntlid": 1, 00:18:30.476 "max_cntlid": 65519, 00:18:30.476 "namespaces": [ 00:18:30.476 { 00:18:30.476 "nsid": 1, 00:18:30.476 "bdev_name": "Malloc2", 00:18:30.476 "name": "Malloc2", 00:18:30.476 "nguid": "CD99579C57BF4F70905ED3C1C202908A", 00:18:30.476 "uuid": "cd99579c-57bf-4f70-905e-d3c1c202908a" 00:18:30.476 } 00:18:30.476 ] 00:18:30.476 } 00:18:30.476 ] 00:18:30.734 22:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:30.734 22:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=230956 00:18:30.734 22:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:30.734 22:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:30.734 22:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:18:30.734 22:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:30.734 22:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:18:30.734 22:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:18:30.734 22:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:18:30.734 22:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:30.734 22:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:18:30.734 22:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:18:30.734 22:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:18:30.734 [2024-10-11 22:41:33.917100] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:30.734 22:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:30.734 22:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:30.734 22:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:30.734 22:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:30.734 22:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:30.992 Malloc3 00:18:31.250 22:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:31.508 [2024-10-11 22:41:34.585137] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:31.508 22:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:31.508 Asynchronous Event Request test 00:18:31.508 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:31.508 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:31.508 Registering asynchronous event callbacks... 00:18:31.508 Starting namespace attribute notice tests for all controllers... 00:18:31.508 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:31.508 aer_cb - Changed Namespace 00:18:31.508 Cleaning up... 
00:18:31.768 [ 00:18:31.768 { 00:18:31.768 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:31.768 "subtype": "Discovery", 00:18:31.768 "listen_addresses": [], 00:18:31.768 "allow_any_host": true, 00:18:31.768 "hosts": [] 00:18:31.768 }, 00:18:31.768 { 00:18:31.768 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:31.768 "subtype": "NVMe", 00:18:31.768 "listen_addresses": [ 00:18:31.768 { 00:18:31.768 "trtype": "VFIOUSER", 00:18:31.768 "adrfam": "IPv4", 00:18:31.768 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:31.768 "trsvcid": "0" 00:18:31.768 } 00:18:31.768 ], 00:18:31.768 "allow_any_host": true, 00:18:31.768 "hosts": [], 00:18:31.768 "serial_number": "SPDK1", 00:18:31.768 "model_number": "SPDK bdev Controller", 00:18:31.768 "max_namespaces": 32, 00:18:31.768 "min_cntlid": 1, 00:18:31.768 "max_cntlid": 65519, 00:18:31.768 "namespaces": [ 00:18:31.768 { 00:18:31.768 "nsid": 1, 00:18:31.768 "bdev_name": "Malloc1", 00:18:31.768 "name": "Malloc1", 00:18:31.768 "nguid": "3B4C522334BE42F087926630CBB5C678", 00:18:31.768 "uuid": "3b4c5223-34be-42f0-8792-6630cbb5c678" 00:18:31.768 }, 00:18:31.768 { 00:18:31.768 "nsid": 2, 00:18:31.768 "bdev_name": "Malloc3", 00:18:31.768 "name": "Malloc3", 00:18:31.768 "nguid": "51B431CBE2D04819BA1BE68EF12C2178", 00:18:31.768 "uuid": "51b431cb-e2d0-4819-ba1b-e68ef12c2178" 00:18:31.768 } 00:18:31.768 ] 00:18:31.768 }, 00:18:31.768 { 00:18:31.768 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:31.768 "subtype": "NVMe", 00:18:31.768 "listen_addresses": [ 00:18:31.768 { 00:18:31.768 "trtype": "VFIOUSER", 00:18:31.768 "adrfam": "IPv4", 00:18:31.768 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:31.768 "trsvcid": "0" 00:18:31.768 } 00:18:31.768 ], 00:18:31.768 "allow_any_host": true, 00:18:31.768 "hosts": [], 00:18:31.768 "serial_number": "SPDK2", 00:18:31.768 "model_number": "SPDK bdev Controller", 00:18:31.768 "max_namespaces": 32, 00:18:31.768 "min_cntlid": 1, 00:18:31.768 "max_cntlid": 65519, 00:18:31.768 "namespaces": [ 
00:18:31.768 { 00:18:31.768 "nsid": 1, 00:18:31.768 "bdev_name": "Malloc2", 00:18:31.768 "name": "Malloc2", 00:18:31.768 "nguid": "CD99579C57BF4F70905ED3C1C202908A", 00:18:31.768 "uuid": "cd99579c-57bf-4f70-905e-d3c1c202908a" 00:18:31.768 } 00:18:31.768 ] 00:18:31.768 } 00:18:31.768 ] 00:18:31.768 22:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 230956 00:18:31.768 22:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:31.768 22:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:31.768 22:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:31.768 22:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:31.768 [2024-10-11 22:41:34.900954] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:18:31.768 [2024-10-11 22:41:34.900990] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid231094 ] 00:18:31.768 [2024-10-11 22:41:34.931734] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:31.768 [2024-10-11 22:41:34.944899] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:31.768 [2024-10-11 22:41:34.944930] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8765192000 00:18:31.768 [2024-10-11 22:41:34.945908] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:31.768 [2024-10-11 22:41:34.946918] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:31.768 [2024-10-11 22:41:34.947924] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:31.768 [2024-10-11 22:41:34.948930] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:31.768 [2024-10-11 22:41:34.949935] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:31.768 [2024-10-11 22:41:34.950935] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:31.768 [2024-10-11 22:41:34.951945] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:31.768 
[2024-10-11 22:41:34.952947] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:31.768 [2024-10-11 22:41:34.953958] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:31.768 [2024-10-11 22:41:34.953980] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8763e8a000 00:18:31.768 [2024-10-11 22:41:34.955132] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:31.768 [2024-10-11 22:41:34.970065] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:31.768 [2024-10-11 22:41:34.970100] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:18:31.768 [2024-10-11 22:41:34.972190] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:31.768 [2024-10-11 22:41:34.972239] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:31.768 [2024-10-11 22:41:34.972325] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:18:31.768 [2024-10-11 22:41:34.972352] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:18:31.768 [2024-10-11 22:41:34.972362] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:18:31.768 [2024-10-11 22:41:34.973198] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:31.768 [2024-10-11 22:41:34.973219] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:18:31.768 [2024-10-11 22:41:34.973232] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:18:31.768 [2024-10-11 22:41:34.974197] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:31.768 [2024-10-11 22:41:34.974217] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:18:31.768 [2024-10-11 22:41:34.974230] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:18:31.768 [2024-10-11 22:41:34.975208] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:31.768 [2024-10-11 22:41:34.975228] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:31.768 [2024-10-11 22:41:34.976219] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:31.768 [2024-10-11 22:41:34.976238] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:18:31.768 [2024-10-11 22:41:34.976247] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:18:31.768 [2024-10-11 22:41:34.976258] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:31.768 [2024-10-11 22:41:34.976368] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:18:31.768 [2024-10-11 22:41:34.976380] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:31.768 [2024-10-11 22:41:34.976389] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:31.768 [2024-10-11 22:41:34.977227] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:31.768 [2024-10-11 22:41:34.978235] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:31.768 [2024-10-11 22:41:34.979244] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:31.768 [2024-10-11 22:41:34.980242] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:31.768 [2024-10-11 22:41:34.980320] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:31.768 [2024-10-11 22:41:34.981256] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:31.768 [2024-10-11 22:41:34.981276] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:31.768 [2024-10-11 22:41:34.981285] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:18:31.768 [2024-10-11 22:41:34.981308] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:18:31.769 [2024-10-11 22:41:34.981321] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:18:31.769 [2024-10-11 22:41:34.981345] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:31.769 [2024-10-11 22:41:34.981355] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:31.769 [2024-10-11 22:41:34.981361] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:31.769 [2024-10-11 22:41:34.981378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:31.769 [2024-10-11 22:41:34.987566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:31.769 [2024-10-11 22:41:34.987589] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:18:31.769 [2024-10-11 22:41:34.987599] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:18:31.769 [2024-10-11 22:41:34.987606] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:18:31.769 [2024-10-11 22:41:34.987614] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:31.769 [2024-10-11 22:41:34.987622] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:18:31.769 [2024-10-11 22:41:34.987630] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:18:31.769 [2024-10-11 22:41:34.987638] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:18:31.769 [2024-10-11 22:41:34.987650] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:18:31.769 [2024-10-11 22:41:34.987670] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:31.769 [2024-10-11 22:41:34.995564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:31.769 [2024-10-11 22:41:34.995587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:31.769 [2024-10-11 22:41:34.995601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:31.769 [2024-10-11 22:41:34.995613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:31.769 [2024-10-11 22:41:34.995625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:31.769 [2024-10-11 22:41:34.995634] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:18:31.769 [2024-10-11 22:41:34.995650] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait 
for set keep alive timeout (timeout 30000 ms) 00:18:31.769 [2024-10-11 22:41:34.995666] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:31.769 [2024-10-11 22:41:35.003582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:31.769 [2024-10-11 22:41:35.003600] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:18:31.769 [2024-10-11 22:41:35.003610] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:31.769 [2024-10-11 22:41:35.003621] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:18:31.769 [2024-10-11 22:41:35.003635] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:18:31.769 [2024-10-11 22:41:35.003651] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:31.769 [2024-10-11 22:41:35.011563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:31.769 [2024-10-11 22:41:35.011640] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:18:31.769 [2024-10-11 22:41:35.011658] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:18:31.769 [2024-10-11 22:41:35.011671] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: 
prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:31.769 [2024-10-11 22:41:35.011680] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:31.769 [2024-10-11 22:41:35.011686] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:31.769 [2024-10-11 22:41:35.011696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:31.769 [2024-10-11 22:41:35.022561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:31.769 [2024-10-11 22:41:35.022586] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:18:31.769 [2024-10-11 22:41:35.022607] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:18:31.769 [2024-10-11 22:41:35.022622] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:18:31.769 [2024-10-11 22:41:35.022640] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:31.769 [2024-10-11 22:41:35.022649] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:31.769 [2024-10-11 22:41:35.022655] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:31.769 [2024-10-11 22:41:35.022665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:31.769 [2024-10-11 22:41:35.030580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:31.769 [2024-10-11 22:41:35.030608] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:31.769 [2024-10-11 22:41:35.030640] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:31.769 [2024-10-11 22:41:35.030655] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:31.769 [2024-10-11 22:41:35.030664] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:31.769 [2024-10-11 22:41:35.030671] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:31.769 [2024-10-11 22:41:35.030681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:32.028 [2024-10-11 22:41:35.038563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:32.028 [2024-10-11 22:41:35.038584] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:32.028 [2024-10-11 22:41:35.038596] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:18:32.028 [2024-10-11 22:41:35.038613] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:18:32.028 [2024-10-11 22:41:35.038623] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:18:32.028 [2024-10-11 22:41:35.038632] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:32.028 [2024-10-11 22:41:35.038640] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:18:32.028 [2024-10-11 22:41:35.038648] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:18:32.028 [2024-10-11 22:41:35.038655] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:18:32.028 [2024-10-11 22:41:35.038663] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:18:32.028 [2024-10-11 22:41:35.038688] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:32.028 [2024-10-11 22:41:35.046559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:32.028 [2024-10-11 22:41:35.046586] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:32.028 [2024-10-11 22:41:35.054561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:32.028 [2024-10-11 22:41:35.054591] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:32.028 [2024-10-11 22:41:35.062563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:32.028 [2024-10-11 22:41:35.062588] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF 
QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:32.028 [2024-10-11 22:41:35.070560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:32.028 [2024-10-11 22:41:35.070593] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:32.028 [2024-10-11 22:41:35.070604] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:32.028 [2024-10-11 22:41:35.070610] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:32.028 [2024-10-11 22:41:35.070616] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:32.028 [2024-10-11 22:41:35.070622] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:32.028 [2024-10-11 22:41:35.070632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:32.028 [2024-10-11 22:41:35.070643] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:32.028 [2024-10-11 22:41:35.070652] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:32.028 [2024-10-11 22:41:35.070658] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:32.028 [2024-10-11 22:41:35.070666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:32.028 [2024-10-11 22:41:35.070677] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:32.028 [2024-10-11 22:41:35.070685] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:32.028 
[2024-10-11 22:41:35.070691] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:32.028 [2024-10-11 22:41:35.070700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:32.028 [2024-10-11 22:41:35.070711] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:32.028 [2024-10-11 22:41:35.070719] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:32.028 [2024-10-11 22:41:35.070725] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:32.029 [2024-10-11 22:41:35.070734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:32.029 [2024-10-11 22:41:35.078577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:32.029 [2024-10-11 22:41:35.078606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:32.029 [2024-10-11 22:41:35.078623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:32.029 [2024-10-11 22:41:35.078636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:32.029 ===================================================== 00:18:32.029 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:32.029 ===================================================== 00:18:32.029 Controller Capabilities/Features 00:18:32.029 ================================ 00:18:32.029 Vendor ID: 4e58 00:18:32.029 Subsystem Vendor ID: 4e58 
00:18:32.029 Serial Number: SPDK2 00:18:32.029 Model Number: SPDK bdev Controller 00:18:32.029 Firmware Version: 25.01 00:18:32.029 Recommended Arb Burst: 6 00:18:32.029 IEEE OUI Identifier: 8d 6b 50 00:18:32.029 Multi-path I/O 00:18:32.029 May have multiple subsystem ports: Yes 00:18:32.029 May have multiple controllers: Yes 00:18:32.029 Associated with SR-IOV VF: No 00:18:32.029 Max Data Transfer Size: 131072 00:18:32.029 Max Number of Namespaces: 32 00:18:32.029 Max Number of I/O Queues: 127 00:18:32.029 NVMe Specification Version (VS): 1.3 00:18:32.029 NVMe Specification Version (Identify): 1.3 00:18:32.029 Maximum Queue Entries: 256 00:18:32.029 Contiguous Queues Required: Yes 00:18:32.029 Arbitration Mechanisms Supported 00:18:32.029 Weighted Round Robin: Not Supported 00:18:32.029 Vendor Specific: Not Supported 00:18:32.029 Reset Timeout: 15000 ms 00:18:32.029 Doorbell Stride: 4 bytes 00:18:32.029 NVM Subsystem Reset: Not Supported 00:18:32.029 Command Sets Supported 00:18:32.029 NVM Command Set: Supported 00:18:32.029 Boot Partition: Not Supported 00:18:32.029 Memory Page Size Minimum: 4096 bytes 00:18:32.029 Memory Page Size Maximum: 4096 bytes 00:18:32.029 Persistent Memory Region: Not Supported 00:18:32.029 Optional Asynchronous Events Supported 00:18:32.029 Namespace Attribute Notices: Supported 00:18:32.029 Firmware Activation Notices: Not Supported 00:18:32.029 ANA Change Notices: Not Supported 00:18:32.029 PLE Aggregate Log Change Notices: Not Supported 00:18:32.029 LBA Status Info Alert Notices: Not Supported 00:18:32.029 EGE Aggregate Log Change Notices: Not Supported 00:18:32.029 Normal NVM Subsystem Shutdown event: Not Supported 00:18:32.029 Zone Descriptor Change Notices: Not Supported 00:18:32.029 Discovery Log Change Notices: Not Supported 00:18:32.029 Controller Attributes 00:18:32.029 128-bit Host Identifier: Supported 00:18:32.029 Non-Operational Permissive Mode: Not Supported 00:18:32.029 NVM Sets: Not Supported 00:18:32.029 Read Recovery 
Levels: Not Supported 00:18:32.029 Endurance Groups: Not Supported 00:18:32.029 Predictable Latency Mode: Not Supported 00:18:32.029 Traffic Based Keep ALive: Not Supported 00:18:32.029 Namespace Granularity: Not Supported 00:18:32.029 SQ Associations: Not Supported 00:18:32.029 UUID List: Not Supported 00:18:32.029 Multi-Domain Subsystem: Not Supported 00:18:32.029 Fixed Capacity Management: Not Supported 00:18:32.029 Variable Capacity Management: Not Supported 00:18:32.029 Delete Endurance Group: Not Supported 00:18:32.029 Delete NVM Set: Not Supported 00:18:32.029 Extended LBA Formats Supported: Not Supported 00:18:32.029 Flexible Data Placement Supported: Not Supported 00:18:32.029 00:18:32.029 Controller Memory Buffer Support 00:18:32.029 ================================ 00:18:32.029 Supported: No 00:18:32.029 00:18:32.029 Persistent Memory Region Support 00:18:32.029 ================================ 00:18:32.029 Supported: No 00:18:32.029 00:18:32.029 Admin Command Set Attributes 00:18:32.029 ============================ 00:18:32.029 Security Send/Receive: Not Supported 00:18:32.029 Format NVM: Not Supported 00:18:32.029 Firmware Activate/Download: Not Supported 00:18:32.029 Namespace Management: Not Supported 00:18:32.029 Device Self-Test: Not Supported 00:18:32.029 Directives: Not Supported 00:18:32.029 NVMe-MI: Not Supported 00:18:32.029 Virtualization Management: Not Supported 00:18:32.029 Doorbell Buffer Config: Not Supported 00:18:32.029 Get LBA Status Capability: Not Supported 00:18:32.029 Command & Feature Lockdown Capability: Not Supported 00:18:32.029 Abort Command Limit: 4 00:18:32.029 Async Event Request Limit: 4 00:18:32.029 Number of Firmware Slots: N/A 00:18:32.029 Firmware Slot 1 Read-Only: N/A 00:18:32.029 Firmware Activation Without Reset: N/A 00:18:32.029 Multiple Update Detection Support: N/A 00:18:32.029 Firmware Update Granularity: No Information Provided 00:18:32.029 Per-Namespace SMART Log: No 00:18:32.029 Asymmetric Namespace Access 
Log Page: Not Supported 00:18:32.029 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:32.029 Command Effects Log Page: Supported 00:18:32.029 Get Log Page Extended Data: Supported 00:18:32.029 Telemetry Log Pages: Not Supported 00:18:32.029 Persistent Event Log Pages: Not Supported 00:18:32.029 Supported Log Pages Log Page: May Support 00:18:32.029 Commands Supported & Effects Log Page: Not Supported 00:18:32.029 Feature Identifiers & Effects Log Page:May Support 00:18:32.029 NVMe-MI Commands & Effects Log Page: May Support 00:18:32.029 Data Area 4 for Telemetry Log: Not Supported 00:18:32.029 Error Log Page Entries Supported: 128 00:18:32.029 Keep Alive: Supported 00:18:32.029 Keep Alive Granularity: 10000 ms 00:18:32.029 00:18:32.029 NVM Command Set Attributes 00:18:32.029 ========================== 00:18:32.029 Submission Queue Entry Size 00:18:32.029 Max: 64 00:18:32.029 Min: 64 00:18:32.029 Completion Queue Entry Size 00:18:32.029 Max: 16 00:18:32.029 Min: 16 00:18:32.029 Number of Namespaces: 32 00:18:32.029 Compare Command: Supported 00:18:32.029 Write Uncorrectable Command: Not Supported 00:18:32.029 Dataset Management Command: Supported 00:18:32.029 Write Zeroes Command: Supported 00:18:32.029 Set Features Save Field: Not Supported 00:18:32.029 Reservations: Not Supported 00:18:32.029 Timestamp: Not Supported 00:18:32.029 Copy: Supported 00:18:32.029 Volatile Write Cache: Present 00:18:32.029 Atomic Write Unit (Normal): 1 00:18:32.029 Atomic Write Unit (PFail): 1 00:18:32.029 Atomic Compare & Write Unit: 1 00:18:32.029 Fused Compare & Write: Supported 00:18:32.029 Scatter-Gather List 00:18:32.029 SGL Command Set: Supported (Dword aligned) 00:18:32.029 SGL Keyed: Not Supported 00:18:32.029 SGL Bit Bucket Descriptor: Not Supported 00:18:32.029 SGL Metadata Pointer: Not Supported 00:18:32.029 Oversized SGL: Not Supported 00:18:32.029 SGL Metadata Address: Not Supported 00:18:32.029 SGL Offset: Not Supported 00:18:32.029 Transport SGL Data Block: Not Supported 
00:18:32.029 Replay Protected Memory Block: Not Supported 00:18:32.029 00:18:32.029 Firmware Slot Information 00:18:32.029 ========================= 00:18:32.029 Active slot: 1 00:18:32.029 Slot 1 Firmware Revision: 25.01 00:18:32.029 00:18:32.029 00:18:32.029 Commands Supported and Effects 00:18:32.029 ============================== 00:18:32.029 Admin Commands 00:18:32.029 -------------- 00:18:32.029 Get Log Page (02h): Supported 00:18:32.029 Identify (06h): Supported 00:18:32.029 Abort (08h): Supported 00:18:32.029 Set Features (09h): Supported 00:18:32.029 Get Features (0Ah): Supported 00:18:32.029 Asynchronous Event Request (0Ch): Supported 00:18:32.029 Keep Alive (18h): Supported 00:18:32.029 I/O Commands 00:18:32.029 ------------ 00:18:32.029 Flush (00h): Supported LBA-Change 00:18:32.029 Write (01h): Supported LBA-Change 00:18:32.029 Read (02h): Supported 00:18:32.029 Compare (05h): Supported 00:18:32.029 Write Zeroes (08h): Supported LBA-Change 00:18:32.029 Dataset Management (09h): Supported LBA-Change 00:18:32.029 Copy (19h): Supported LBA-Change 00:18:32.029 00:18:32.029 Error Log 00:18:32.029 ========= 00:18:32.029 00:18:32.029 Arbitration 00:18:32.029 =========== 00:18:32.029 Arbitration Burst: 1 00:18:32.029 00:18:32.029 Power Management 00:18:32.029 ================ 00:18:32.029 Number of Power States: 1 00:18:32.029 Current Power State: Power State #0 00:18:32.030 Power State #0: 00:18:32.030 Max Power: 0.00 W 00:18:32.030 Non-Operational State: Operational 00:18:32.030 Entry Latency: Not Reported 00:18:32.030 Exit Latency: Not Reported 00:18:32.030 Relative Read Throughput: 0 00:18:32.030 Relative Read Latency: 0 00:18:32.030 Relative Write Throughput: 0 00:18:32.030 Relative Write Latency: 0 00:18:32.030 Idle Power: Not Reported 00:18:32.030 Active Power: Not Reported 00:18:32.030 Non-Operational Permissive Mode: Not Supported 00:18:32.030 00:18:32.030 Health Information 00:18:32.030 ================== 00:18:32.030 Critical Warnings: 00:18:32.030 
Available Spare Space: OK 00:18:32.030 Temperature: OK 00:18:32.030 Device Reliability: OK 00:18:32.030 Read Only: No 00:18:32.030 Volatile Memory Backup: OK 00:18:32.030 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:32.030 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:32.030 Available Spare: 0% 00:18:32.030 Available Spare Threshold: 0% 00:18:32.030 Life Percentage Used: 0% 00:18:32.030 Data Units Read: 0 00:18:32.030 Data Units Written: 0 00:18:32.030 Host Read Commands: 0 00:18:32.030 Host Write Commands: 0 00:18:32.030 Controller Busy Time: 0 minutes 00:18:32.030 Power Cycles: 0 00:18:32.030 Power On Hours: 0 hours 00:18:32.030 Unsafe Shutdowns: 0 00:18:32.030 Unrecoverable Media Errors: 0 00:18:32.030 Lifetime Error Log Entries: 0 00:18:32.030 Warning Temperature Time: 0 minutes 00:18:32.030 Critical Temperature Time: 0 minutes 00:18:32.030 00:18:32.030 Number of Queues 00:18:32.030 ================ 00:18:32.030 Number of I/O Submission Queues: 127 00:18:32.030 Number of I/O Completion Queues: 127 00:18:32.030 00:18:32.030 Active Namespaces 00:18:32.030 ================= 00:18:32.030 Namespace ID:1 00:18:32.030 Error Recovery Timeout: Unlimited 00:18:32.030 Command Set Identifier: NVM (00h) 00:18:32.030 Deallocate: Supported 00:18:32.030 Deallocated/Unwritten Error: Not Supported 
00:18:32.030 [2024-10-11 22:41:35.078754] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:32.030 [2024-10-11 22:41:35.086563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:32.030 [2024-10-11 22:41:35.086619] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:18:32.030 [2024-10-11 22:41:35.086640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.030 [2024-10-11 22:41:35.086652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.030 [2024-10-11 22:41:35.086661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.030 [2024-10-11 22:41:35.086671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.030 [2024-10-11 22:41:35.086737] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:32.030 [2024-10-11 22:41:35.086757] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:32.030 [2024-10-11 22:41:35.087743] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:32.030 [2024-10-11 22:41:35.087820] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:18:32.030 [2024-10-11 22:41:35.087850] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:18:32.030 [2024-10-11 22:41:35.088753] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:32.030 [2024-10-11 22:41:35.088777] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:18:32.030 [2024-10-11 22:41:35.088848] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:32.030 [2024-10-11 22:41:35.090023] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 
00:18:32.030 Deallocated Read Value: Unknown 00:18:32.030 Deallocate in Write Zeroes: Not Supported 00:18:32.030 Deallocated Guard Field: 0xFFFF 00:18:32.030 Flush: Supported 00:18:32.030 Reservation: Supported 00:18:32.030 Namespace Sharing Capabilities: Multiple Controllers 00:18:32.030 Size (in LBAs): 131072 (0GiB) 00:18:32.030 Capacity (in LBAs): 131072 (0GiB) 00:18:32.030 Utilization (in LBAs): 131072 (0GiB) 00:18:32.030 NGUID: CD99579C57BF4F70905ED3C1C202908A 00:18:32.030 UUID: cd99579c-57bf-4f70-905e-d3c1c202908a 00:18:32.030 Thin Provisioning: Not Supported 00:18:32.030 Per-NS Atomic Units: Yes 00:18:32.030 Atomic Boundary Size (Normal): 0 00:18:32.030 Atomic Boundary Size (PFail): 0 00:18:32.030 Atomic Boundary Offset: 0 00:18:32.030 Maximum Single Source Range Length: 65535 00:18:32.030 Maximum Copy Length: 65535 00:18:32.030 Maximum Source Range Count: 1 00:18:32.030 NGUID/EUI64 Never Reused: No 00:18:32.030 Namespace Write Protected: No 00:18:32.030 Number of LBA Formats: 1 00:18:32.030 Current LBA Format: LBA Format #00 00:18:32.030 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:32.030 00:18:32.030 22:41:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:32.288 [2024-10-11 22:41:35.309990] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:37.562 Initializing NVMe Controllers 00:18:37.562 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:37.562 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:37.563 Initialization complete. Launching workers. 
00:18:37.563 ======================================================== 00:18:37.563 Latency(us) 00:18:37.563 Device Information : IOPS MiB/s Average min max 00:18:37.563 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32598.16 127.34 3925.73 1176.67 8275.49 00:18:37.563 ======================================================== 00:18:37.563 Total : 32598.16 127.34 3925.73 1176.67 8275.49 00:18:37.563 00:18:37.563 [2024-10-11 22:41:40.415941] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:37.563 22:41:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:37.563 [2024-10-11 22:41:40.662609] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:42.831 Initializing NVMe Controllers 00:18:42.831 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:42.831 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:42.831 Initialization complete. Launching workers. 
00:18:42.831 ======================================================== 00:18:42.831 Latency(us) 00:18:42.831 Device Information : IOPS MiB/s Average min max 00:18:42.831 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30907.39 120.73 4140.99 1226.64 8241.06 00:18:42.831 ======================================================== 00:18:42.831 Total : 30907.39 120.73 4140.99 1226.64 8241.06 00:18:42.831 00:18:42.831 [2024-10-11 22:41:45.683800] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:42.831 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:42.831 [2024-10-11 22:41:45.886663] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:48.099 [2024-10-11 22:41:51.019704] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:48.099 Initializing NVMe Controllers 00:18:48.099 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:48.099 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:48.099 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:48.099 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:48.099 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:48.099 Initialization complete. Launching workers. 
00:18:48.099 Starting thread on core 2 00:18:48.099 Starting thread on core 3 00:18:48.099 Starting thread on core 1 00:18:48.099 22:41:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:48.099 [2024-10-11 22:41:51.335123] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:52.288 [2024-10-11 22:41:54.935851] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:52.288 Initializing NVMe Controllers 00:18:52.288 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:52.288 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:52.288 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:52.288 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:52.288 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:52.288 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:52.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:52.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:52.288 Initialization complete. Launching workers. 
00:18:52.288 Starting thread on core 1 with urgent priority queue 00:18:52.288 Starting thread on core 2 with urgent priority queue 00:18:52.288 Starting thread on core 3 with urgent priority queue 00:18:52.288 Starting thread on core 0 with urgent priority queue 00:18:52.288 SPDK bdev Controller (SPDK2 ) core 0: 6408.67 IO/s 15.60 secs/100000 ios 00:18:52.288 SPDK bdev Controller (SPDK2 ) core 1: 4852.33 IO/s 20.61 secs/100000 ios 00:18:52.288 SPDK bdev Controller (SPDK2 ) core 2: 5555.33 IO/s 18.00 secs/100000 ios 00:18:52.288 SPDK bdev Controller (SPDK2 ) core 3: 6144.00 IO/s 16.28 secs/100000 ios 00:18:52.288 ======================================================== 00:18:52.288 00:18:52.288 22:41:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:52.288 [2024-10-11 22:41:55.226864] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:52.288 Initializing NVMe Controllers 00:18:52.288 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:52.288 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:52.288 Namespace ID: 1 size: 0GB 00:18:52.288 Initialization complete. 00:18:52.288 INFO: using host memory buffer for IO 00:18:52.288 Hello world! 
00:18:52.288 [2024-10-11 22:41:55.237949] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:52.288 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:52.288 [2024-10-11 22:41:55.520923] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:53.662 Initializing NVMe Controllers 00:18:53.662 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:53.662 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:53.662 Initialization complete. Launching workers. 00:18:53.662 submit (in ns) avg, min, max = 7439.6, 3516.7, 4016810.0 00:18:53.662 complete (in ns) avg, min, max = 28435.2, 2056.7, 4031653.3 00:18:53.662 00:18:53.662 Submit histogram 00:18:53.662 ================ 00:18:53.662 Range in us Cumulative Count 00:18:53.662 3.508 - 3.532: 0.0469% ( 6) 00:18:53.662 3.532 - 3.556: 0.3598% ( 40) 00:18:53.662 3.556 - 3.579: 1.2357% ( 112) 00:18:53.662 3.579 - 3.603: 3.1441% ( 244) 00:18:53.662 3.603 - 3.627: 6.7574% ( 462) 00:18:53.662 3.627 - 3.650: 12.7561% ( 767) 00:18:53.662 3.650 - 3.674: 20.4677% ( 986) 00:18:53.662 3.674 - 3.698: 29.0630% ( 1099) 00:18:53.662 3.698 - 3.721: 37.5723% ( 1088) 00:18:53.662 3.721 - 3.745: 44.7834% ( 922) 00:18:53.662 3.745 - 3.769: 50.1251% ( 683) 00:18:53.662 3.769 - 3.793: 54.6457% ( 578) 00:18:53.662 3.793 - 3.816: 58.3294% ( 471) 00:18:53.662 3.816 - 3.840: 61.8567% ( 451) 00:18:53.662 3.840 - 3.864: 65.6421% ( 484) 00:18:53.662 3.864 - 3.887: 69.6778% ( 516) 00:18:53.662 3.887 - 3.911: 73.8229% ( 530) 00:18:53.662 3.911 - 3.935: 78.1245% ( 550) 00:18:53.662 3.935 - 3.959: 81.5580% ( 439) 00:18:53.662 3.959 - 3.982: 84.5534% ( 383) 00:18:53.662 3.982 - 4.006: 86.7981% ( 287) 
00:18:53.662 4.006 - 4.030: 88.5265% ( 221) 00:18:53.662 4.030 - 4.053: 89.6997% ( 150) 00:18:53.662 4.053 - 4.077: 90.8415% ( 146) 00:18:53.662 4.077 - 4.101: 91.8426% ( 128) 00:18:53.662 4.101 - 4.124: 92.7421% ( 115) 00:18:53.662 4.124 - 4.148: 93.5711% ( 106) 00:18:53.662 4.148 - 4.172: 94.4549% ( 113) 00:18:53.662 4.172 - 4.196: 95.0727% ( 79) 00:18:53.662 4.196 - 4.219: 95.5967% ( 67) 00:18:53.662 4.219 - 4.243: 95.9409% ( 44) 00:18:53.662 4.243 - 4.267: 96.0660% ( 16) 00:18:53.662 4.267 - 4.290: 96.2694% ( 26) 00:18:53.662 4.290 - 4.314: 96.5353% ( 34) 00:18:53.662 4.314 - 4.338: 96.6369% ( 13) 00:18:53.662 4.338 - 4.361: 96.7699% ( 17) 00:18:53.662 4.361 - 4.385: 96.8559% ( 11) 00:18:53.662 4.385 - 4.409: 96.9498% ( 12) 00:18:53.662 4.409 - 4.433: 97.0436% ( 12) 00:18:53.662 4.433 - 4.456: 97.0827% ( 5) 00:18:53.662 4.456 - 4.480: 97.1062% ( 3) 00:18:53.662 4.480 - 4.504: 97.1297% ( 3) 00:18:53.662 4.504 - 4.527: 97.1610% ( 4) 00:18:53.662 4.527 - 4.551: 97.1922% ( 4) 00:18:53.662 4.551 - 4.575: 97.2157% ( 3) 00:18:53.662 4.575 - 4.599: 97.2392% ( 3) 00:18:53.662 4.622 - 4.646: 97.2548% ( 2) 00:18:53.662 4.646 - 4.670: 97.2626% ( 1) 00:18:53.662 4.670 - 4.693: 97.2783% ( 2) 00:18:53.662 4.741 - 4.764: 97.2939% ( 2) 00:18:53.662 4.764 - 4.788: 97.3096% ( 2) 00:18:53.662 4.812 - 4.836: 97.3252% ( 2) 00:18:53.662 4.836 - 4.859: 97.3643% ( 5) 00:18:53.662 4.859 - 4.883: 97.4347% ( 9) 00:18:53.662 4.883 - 4.907: 97.4582% ( 3) 00:18:53.662 4.907 - 4.930: 97.4894% ( 4) 00:18:53.662 4.930 - 4.954: 97.5520% ( 8) 00:18:53.662 4.954 - 4.978: 97.5989% ( 6) 00:18:53.662 4.978 - 5.001: 97.6771% ( 10) 00:18:53.662 5.001 - 5.025: 97.7163% ( 5) 00:18:53.662 5.025 - 5.049: 97.7945% ( 10) 00:18:53.662 5.049 - 5.073: 97.8492% ( 7) 00:18:53.662 5.073 - 5.096: 97.8805% ( 4) 00:18:53.662 5.096 - 5.120: 97.8883% ( 1) 00:18:53.662 5.120 - 5.144: 97.9274% ( 5) 00:18:53.662 5.144 - 5.167: 97.9587% ( 4) 00:18:53.662 5.167 - 5.191: 98.0056% ( 6) 00:18:53.662 5.191 - 5.215: 98.0291% ( 
3) 00:18:53.662 5.215 - 5.239: 98.0526% ( 3) 00:18:53.662 5.239 - 5.262: 98.0682% ( 2) 00:18:53.662 5.262 - 5.286: 98.0760% ( 1) 00:18:53.662 5.286 - 5.310: 98.0917% ( 2) 00:18:53.662 5.310 - 5.333: 98.1151% ( 3) 00:18:53.662 5.357 - 5.381: 98.1229% ( 1) 00:18:53.662 5.381 - 5.404: 98.1308% ( 1) 00:18:53.662 5.404 - 5.428: 98.1386% ( 1) 00:18:53.662 5.428 - 5.452: 98.1464% ( 1) 00:18:53.662 5.452 - 5.476: 98.1542% ( 1) 00:18:53.662 5.499 - 5.523: 98.1621% ( 1) 00:18:53.662 5.523 - 5.547: 98.1933% ( 4) 00:18:53.662 5.570 - 5.594: 98.2012% ( 1) 00:18:53.662 5.618 - 5.641: 98.2090% ( 1) 00:18:53.662 5.689 - 5.713: 98.2168% ( 1) 00:18:53.662 5.760 - 5.784: 98.2246% ( 1) 00:18:53.662 5.926 - 5.950: 98.2324% ( 1) 00:18:53.662 5.973 - 5.997: 98.2403% ( 1) 00:18:53.662 5.997 - 6.021: 98.2481% ( 1) 00:18:53.662 6.044 - 6.068: 98.2559% ( 1) 00:18:53.662 6.068 - 6.116: 98.2637% ( 1) 00:18:53.662 6.163 - 6.210: 98.2715% ( 1) 00:18:53.662 6.400 - 6.447: 98.2794% ( 1) 00:18:53.662 6.495 - 6.542: 98.2872% ( 1) 00:18:53.662 6.732 - 6.779: 98.2950% ( 1) 00:18:53.662 7.111 - 7.159: 98.3028% ( 1) 00:18:53.662 7.253 - 7.301: 98.3341% ( 4) 00:18:53.662 7.348 - 7.396: 98.3419% ( 1) 00:18:53.662 7.396 - 7.443: 98.3498% ( 1) 00:18:53.662 7.490 - 7.538: 98.3576% ( 1) 00:18:53.662 7.538 - 7.585: 98.3810% ( 3) 00:18:53.662 7.633 - 7.680: 98.3967% ( 2) 00:18:53.662 7.680 - 7.727: 98.4045% ( 1) 00:18:53.662 7.727 - 7.775: 98.4201% ( 2) 00:18:53.662 7.917 - 7.964: 98.4280% ( 1) 00:18:53.662 8.012 - 8.059: 98.4358% ( 1) 00:18:53.662 8.107 - 8.154: 98.4514% ( 2) 00:18:53.662 8.154 - 8.201: 98.4593% ( 1) 00:18:53.662 8.201 - 8.249: 98.4749% ( 2) 00:18:53.662 8.249 - 8.296: 98.4905% ( 2) 00:18:53.662 8.296 - 8.344: 98.4984% ( 1) 00:18:53.662 8.391 - 8.439: 98.5062% ( 1) 00:18:53.662 8.486 - 8.533: 98.5218% ( 2) 00:18:53.662 8.533 - 8.581: 98.5296% ( 1) 00:18:53.662 8.581 - 8.628: 98.5375% ( 1) 00:18:53.662 8.628 - 8.676: 98.5609% ( 3) 00:18:53.662 8.676 - 8.723: 98.5766% ( 2) 00:18:53.662 8.723 - 
8.770: 98.5844% ( 1) 00:18:53.662 8.770 - 8.818: 98.6000% ( 2) 00:18:53.662 8.913 - 8.960: 98.6157% ( 2) 00:18:53.662 9.102 - 9.150: 98.6235% ( 1) 00:18:53.662 9.150 - 9.197: 98.6391% ( 2) 00:18:53.662 9.387 - 9.434: 98.6470% ( 1) 00:18:53.662 9.529 - 9.576: 98.6548% ( 1) 00:18:53.662 9.576 - 9.624: 98.6626% ( 1) 00:18:53.662 9.719 - 9.766: 98.6861% ( 3) 00:18:53.662 9.766 - 9.813: 98.6939% ( 1) 00:18:53.662 9.908 - 9.956: 98.7017% ( 1) 00:18:53.662 10.050 - 10.098: 98.7173% ( 2) 00:18:53.662 10.098 - 10.145: 98.7252% ( 1) 00:18:53.662 10.240 - 10.287: 98.7330% ( 1) 00:18:53.662 10.287 - 10.335: 98.7408% ( 1) 00:18:53.662 10.382 - 10.430: 98.7486% ( 1) 00:18:53.662 10.430 - 10.477: 98.7565% ( 1) 00:18:53.662 10.572 - 10.619: 98.7643% ( 1) 00:18:53.662 10.667 - 10.714: 98.7799% ( 2) 00:18:53.662 10.714 - 10.761: 98.7877% ( 1) 00:18:53.662 10.761 - 10.809: 98.7956% ( 1) 00:18:53.662 10.904 - 10.951: 98.8034% ( 1) 00:18:53.662 10.999 - 11.046: 98.8112% ( 1) 00:18:53.662 11.188 - 11.236: 98.8190% ( 1) 00:18:53.662 11.378 - 11.425: 98.8268% ( 1) 00:18:53.662 11.425 - 11.473: 98.8347% ( 1) 00:18:53.662 11.662 - 11.710: 98.8503% ( 2) 00:18:53.662 11.804 - 11.852: 98.8581% ( 1) 00:18:53.662 11.852 - 11.899: 98.8659% ( 1) 00:18:53.662 11.947 - 11.994: 98.8738% ( 1) 00:18:53.662 12.136 - 12.231: 98.8894% ( 2) 00:18:53.662 12.990 - 13.084: 98.8972% ( 1) 00:18:53.662 13.179 - 13.274: 98.9051% ( 1) 00:18:53.662 13.369 - 13.464: 98.9129% ( 1) 00:18:53.662 13.464 - 13.559: 98.9285% ( 2) 00:18:53.663 13.559 - 13.653: 98.9363% ( 1) 00:18:53.663 13.653 - 13.748: 98.9442% ( 1) 00:18:53.663 13.843 - 13.938: 98.9598% ( 2) 00:18:53.663 14.317 - 14.412: 98.9676% ( 1) 00:18:53.663 15.076 - 15.170: 98.9754% ( 1) 00:18:53.663 15.265 - 15.360: 98.9833% ( 1) 00:18:53.663 15.455 - 15.550: 98.9911% ( 1) 00:18:53.663 16.782 - 16.877: 98.9989% ( 1) 00:18:53.663 16.972 - 17.067: 99.0067% ( 1) 00:18:53.663 17.161 - 17.256: 99.0145% ( 1) 00:18:53.663 17.256 - 17.351: 99.0224% ( 1) 00:18:53.663 
17.351 - 17.446: 99.0537% ( 4) 00:18:53.663 17.446 - 17.541: 99.1084% ( 7) 00:18:53.663 17.541 - 17.636: 99.1319% ( 3) 00:18:53.663 17.636 - 17.730: 99.1944% ( 8) 00:18:53.663 17.730 - 17.825: 99.2335% ( 5) 00:18:53.663 17.825 - 17.920: 99.2648% ( 4) 00:18:53.663 17.920 - 18.015: 99.3274% ( 8) 00:18:53.663 18.015 - 18.110: 99.3978% ( 9) 00:18:53.663 18.110 - 18.204: 99.4525% ( 7) 00:18:53.663 18.204 - 18.299: 99.5151% ( 8) 00:18:53.663 18.299 - 18.394: 99.5542% ( 5) 00:18:53.663 18.394 - 18.489: 99.6089% ( 7) 00:18:53.663 18.489 - 18.584: 99.6715% ( 8) 00:18:53.663 18.584 - 18.679: 99.7184% ( 6) 00:18:53.663 18.679 - 18.773: 99.7419% ( 3) 00:18:53.663 18.773 - 18.868: 99.7810% ( 5) 00:18:53.663 18.868 - 18.963: 99.8045% ( 3) 00:18:53.663 18.963 - 19.058: 99.8436% ( 5) 00:18:53.663 19.153 - 19.247: 99.8514% ( 1) 00:18:53.663 19.342 - 19.437: 99.8592% ( 1) 00:18:53.663 19.627 - 19.721: 99.8749% ( 2) 00:18:53.663 19.911 - 20.006: 99.8827% ( 1) 00:18:53.663 21.144 - 21.239: 99.8905% ( 1) 00:18:53.663 22.756 - 22.850: 99.8983% ( 1) 00:18:53.663 25.221 - 25.410: 99.9061% ( 1) 00:18:53.663 27.117 - 27.307: 99.9140% ( 1) 00:18:53.663 3980.705 - 4004.978: 99.9765% ( 8) 00:18:53.663 4004.978 - 4029.250: 100.0000% ( 3) 00:18:53.663 00:18:53.663 Complete histogram 00:18:53.663 ================== 00:18:53.663 Range in us Cumulative Count 00:18:53.663 2.050 - 2.062: 0.2346% ( 30) 00:18:53.663 2.062 - 2.074: 28.3591% ( 3596) 00:18:53.663 2.074 - 2.086: 51.0558% ( 2902) 00:18:53.663 2.086 - 2.098: 52.5497% ( 191) 00:18:53.663 2.098 - 2.110: 57.0389% ( 574) 00:18:53.663 2.110 - 2.121: 59.3540% ( 296) 00:18:53.663 2.121 - 2.133: 62.1226% ( 354) 00:18:53.663 2.133 - 2.145: 73.1581% ( 1411) 00:18:53.663 2.145 - 2.157: 77.2173% ( 519) 00:18:53.663 2.157 - 2.169: 78.1714% ( 122) 00:18:53.663 2.169 - 2.181: 79.8764% ( 218) 00:18:53.663 2.181 - 2.193: 80.6194% ( 95) 00:18:53.663 2.193 - 2.204: 81.6049% ( 126) 00:18:53.663 2.204 - 2.216: 86.6260% ( 642) 00:18:53.663 2.216 - 2.228: 89.9578% 
( 426) 00:18:53.663 2.228 - 2.240: 91.4751% ( 194) 00:18:53.663 2.240 - 2.252: 92.7499% ( 163) 00:18:53.663 2.252 - 2.264: 93.0940% ( 44) 00:18:53.663 2.264 - 2.276: 93.4460% ( 45) 00:18:53.663 2.276 - 2.287: 93.8605% ( 53) 00:18:53.663 2.287 - 2.299: 94.4236% ( 72) 00:18:53.663 2.299 - 2.311: 95.0571% ( 81) 00:18:53.663 2.311 - 2.323: 95.2526% ( 25) 00:18:53.663 2.323 - 2.335: 95.2761% ( 3) 00:18:53.663 2.335 - 2.347: 95.3465% ( 9) 00:18:53.663 2.347 - 2.359: 95.4560% ( 14) 00:18:53.663 2.359 - 2.370: 95.6359% ( 23) 00:18:53.663 2.370 - 2.382: 95.9878% ( 45) 00:18:53.663 2.382 - 2.394: 96.3085% ( 41) 00:18:53.663 2.394 - 2.406: 96.5978% ( 37) 00:18:53.663 2.406 - 2.418: 96.7621% ( 21) 00:18:53.663 2.418 - 2.430: 96.8950% ( 17) 00:18:53.663 2.430 - 2.441: 97.1219% ( 29) 00:18:53.663 2.441 - 2.453: 97.2626% ( 18) 00:18:53.663 2.453 - 2.465: 97.4894% ( 29) 00:18:53.663 2.465 - 2.477: 97.7006% ( 27) 00:18:53.663 2.477 - 2.489: 97.8805% ( 23) 00:18:53.663 2.489 - 2.501: 97.9822% ( 13) 00:18:53.663 2.501 - 2.513: 98.0369% ( 7) 00:18:53.663 2.513 - 2.524: 98.1151% ( 10) 00:18:53.663 2.524 - 2.536: 98.1621% ( 6) 00:18:53.663 2.536 - 2.548: 98.1933% ( 4) 00:18:53.663 2.548 - 2.560: 98.2246% ( 4) 00:18:53.663 2.560 - 2.572: 98.2403% ( 2) 00:18:53.663 2.572 - 2.584: 98.2637% ( 3) 00:18:53.663 2.596 - 2.607: 98.2715% ( 1) 00:18:53.663 2.631 - 2.643: 98.2794% ( 1) 00:18:53.663 2.667 - 2.679: 98.2872% ( 1) 00:18:53.663 2.714 - 2.726: 98.3028% ( 2) 00:18:53.663 2.773 - 2.785: 98.3107% ( 1) 00:18:53.663 2.809 - 2.821: 98.3185% ( 1) 00:18:53.663 2.844 - 2.856: 98.3341% ( 2) 00:18:53.663 2.868 - 2.880: 98.3419% ( 1) 00:18:53.663 3.034 - 3.058: 98.3498% ( 1) 00:18:53.663 3.319 - 3.342: 98.3576% ( 1) 00:18:53.663 3.366 - 3.390: 98.3654% ( 1) 00:18:53.663 3.390 - 3.413: 98.3732% ( 1) 00:18:53.663 3.413 - 3.437: 98.3889% ( 2) 00:18:53.663 3.461 - 3.484: 98.4280% ( 5) 00:18:53.663 3.484 - 3.508: 98.4358% ( 1) 00:18:53.663 3.508 - 3.532: 98.4436% ( 1) 00:18:53.663 3.556 - 3.579: 98.4514% 
( 1) 00:18:53.663 3.579 - 3.603: 98.4749% ( 3) 00:18:53.663 [2024-10-11 22:41:56.621340] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:53.663 3.627 - 3.650: 98.4984% ( 3) 00:18:53.663 3.721 - 3.745: 98.5062% ( 1) 00:18:53.663 3.745 - 3.769: 98.5140% ( 1) 00:18:53.663 3.769 - 3.793: 98.5218% ( 1) 00:18:53.663 3.840 - 3.864: 98.5296% ( 1) 00:18:53.663 3.864 - 3.887: 98.5531% ( 3) 00:18:53.663 3.887 - 3.911: 98.5609% ( 1) 00:18:53.663 3.911 - 3.935: 98.5687% ( 1) 00:18:53.663 4.030 - 4.053: 98.5766% ( 1) 00:18:53.663 4.504 - 4.527: 98.5844% ( 1) 00:18:53.663 5.073 - 5.096: 98.5922% ( 1) 00:18:53.663 5.689 - 5.713: 98.6000% ( 1) 00:18:53.663 6.068 - 6.116: 98.6079% ( 1) 00:18:53.663 6.210 - 6.258: 98.6157% ( 1) 00:18:53.663 6.353 - 6.400: 98.6313% ( 2) 00:18:53.663 6.495 - 6.542: 98.6391% ( 1) 00:18:53.663 6.542 - 6.590: 98.6470% ( 1) 00:18:53.663 6.637 - 6.684: 98.6548% ( 1) 00:18:53.663 6.827 - 6.874: 98.6626% ( 1) 00:18:53.663 6.874 - 6.921: 98.6704% ( 1) 00:18:53.663 6.921 - 6.969: 98.6861% ( 2) 00:18:53.663 7.159 - 7.206: 98.6939% ( 1) 00:18:53.663 7.490 - 7.538: 98.7095% ( 2) 00:18:53.663 7.538 - 7.585: 98.7173% ( 1) 00:18:53.663 9.007 - 9.055: 98.7252% ( 1) 00:18:53.663 9.387 - 9.434: 98.7330% ( 1) 00:18:53.663 13.179 - 13.274: 98.7408% ( 1) 00:18:53.663 15.550 - 15.644: 98.7486% ( 1) 00:18:53.663 15.644 - 15.739: 98.7565% ( 1) 00:18:53.663 15.739 - 15.834: 98.7643% ( 1) 00:18:53.663 15.834 - 15.929: 98.7877% ( 3) 00:18:53.663 15.929 - 16.024: 98.8190% ( 4) 00:18:53.663 16.024 - 16.119: 98.8581% ( 5) 00:18:53.663 16.119 - 16.213: 98.8738% ( 2) 00:18:53.663 16.213 - 16.308: 98.9051% ( 4) 00:18:53.663 16.308 - 16.403: 98.9442% ( 5) 00:18:53.663 16.403 - 16.498: 98.9911% ( 6) 00:18:53.663 16.498 - 16.593: 99.0458% ( 7) 00:18:53.663 16.593 - 16.687: 99.1162% ( 9) 00:18:53.663 16.687 - 16.782: 99.1319% ( 2) 00:18:53.663 16.782 - 16.877: 99.1631% ( 4) 00:18:53.663 16.877 - 16.972: 99.1944% ( 4) 
00:18:53.663 17.067 - 17.161: 99.2101% ( 2) 00:18:53.663 17.161 - 17.256: 99.2414% ( 4) 00:18:53.663 17.256 - 17.351: 99.2726% ( 4) 00:18:53.663 17.446 - 17.541: 99.2805% ( 1) 00:18:53.663 17.730 - 17.825: 99.2883% ( 1) 00:18:53.663 17.825 - 17.920: 99.2961% ( 1) 00:18:53.663 18.015 - 18.110: 99.3039% ( 1) 00:18:53.663 18.299 - 18.394: 99.3117% ( 1) 00:18:53.663 18.584 - 18.679: 99.3196% ( 1) 00:18:53.663 20.196 - 20.290: 99.3274% ( 1) 00:18:53.663 22.281 - 22.376: 99.3352% ( 1) 00:18:53.663 23.230 - 23.324: 99.3430% ( 1) 00:18:53.663 2997.665 - 3009.801: 99.3509% ( 1) 00:18:53.663 3980.705 - 4004.978: 99.8670% ( 66) 00:18:53.663 4004.978 - 4029.250: 99.9922% ( 16) 00:18:53.663 4029.250 - 4053.523: 100.0000% ( 1) 00:18:53.663 00:18:53.663 22:41:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:53.663 22:41:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:53.663 22:41:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:53.663 22:41:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:53.663 22:41:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:53.922 [ 00:18:53.922 { 00:18:53.922 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:53.922 "subtype": "Discovery", 00:18:53.922 "listen_addresses": [], 00:18:53.922 "allow_any_host": true, 00:18:53.922 "hosts": [] 00:18:53.922 }, 00:18:53.922 { 00:18:53.922 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:53.922 "subtype": "NVMe", 00:18:53.922 "listen_addresses": [ 00:18:53.922 { 00:18:53.922 "trtype": "VFIOUSER", 00:18:53.922 "adrfam": "IPv4", 00:18:53.922 "traddr": 
"/var/run/vfio-user/domain/vfio-user1/1", 00:18:53.922 "trsvcid": "0" 00:18:53.922 } 00:18:53.922 ], 00:18:53.922 "allow_any_host": true, 00:18:53.922 "hosts": [], 00:18:53.922 "serial_number": "SPDK1", 00:18:53.922 "model_number": "SPDK bdev Controller", 00:18:53.922 "max_namespaces": 32, 00:18:53.922 "min_cntlid": 1, 00:18:53.922 "max_cntlid": 65519, 00:18:53.922 "namespaces": [ 00:18:53.922 { 00:18:53.922 "nsid": 1, 00:18:53.922 "bdev_name": "Malloc1", 00:18:53.922 "name": "Malloc1", 00:18:53.922 "nguid": "3B4C522334BE42F087926630CBB5C678", 00:18:53.922 "uuid": "3b4c5223-34be-42f0-8792-6630cbb5c678" 00:18:53.922 }, 00:18:53.922 { 00:18:53.922 "nsid": 2, 00:18:53.922 "bdev_name": "Malloc3", 00:18:53.922 "name": "Malloc3", 00:18:53.922 "nguid": "51B431CBE2D04819BA1BE68EF12C2178", 00:18:53.922 "uuid": "51b431cb-e2d0-4819-ba1b-e68ef12c2178" 00:18:53.922 } 00:18:53.922 ] 00:18:53.922 }, 00:18:53.922 { 00:18:53.922 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:53.922 "subtype": "NVMe", 00:18:53.922 "listen_addresses": [ 00:18:53.922 { 00:18:53.922 "trtype": "VFIOUSER", 00:18:53.922 "adrfam": "IPv4", 00:18:53.922 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:53.922 "trsvcid": "0" 00:18:53.922 } 00:18:53.922 ], 00:18:53.922 "allow_any_host": true, 00:18:53.922 "hosts": [], 00:18:53.922 "serial_number": "SPDK2", 00:18:53.922 "model_number": "SPDK bdev Controller", 00:18:53.922 "max_namespaces": 32, 00:18:53.922 "min_cntlid": 1, 00:18:53.922 "max_cntlid": 65519, 00:18:53.922 "namespaces": [ 00:18:53.922 { 00:18:53.922 "nsid": 1, 00:18:53.922 "bdev_name": "Malloc2", 00:18:53.922 "name": "Malloc2", 00:18:53.922 "nguid": "CD99579C57BF4F70905ED3C1C202908A", 00:18:53.922 "uuid": "cd99579c-57bf-4f70-905e-d3c1c202908a" 00:18:53.922 } 00:18:53.922 ] 00:18:53.922 } 00:18:53.922 ] 00:18:53.922 22:41:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:53.922 22:41:56 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=233618 00:18:53.922 22:41:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:53.922 22:41:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:53.922 22:41:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:18:53.922 22:41:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:53.922 22:41:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:18:53.922 22:41:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:18:53.922 22:41:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:18:53.922 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:53.922 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:18:53.922 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:18:53.922 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:18:53.922 [2024-10-11 22:41:57.095084] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:53.922 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:53.922 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:53.922 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:53.922 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:53.922 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:54.181 Malloc4 00:18:54.438 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:54.695 [2024-10-11 22:41:57.712893] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:54.695 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:54.695 Asynchronous Event Request test 00:18:54.695 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:54.695 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:54.695 Registering asynchronous event callbacks... 00:18:54.695 Starting namespace attribute notice tests for all controllers... 00:18:54.695 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:54.695 aer_cb - Changed Namespace 00:18:54.695 Cleaning up... 
00:18:54.954 [ 00:18:54.954 { 00:18:54.954 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:54.954 "subtype": "Discovery", 00:18:54.954 "listen_addresses": [], 00:18:54.954 "allow_any_host": true, 00:18:54.954 "hosts": [] 00:18:54.954 }, 00:18:54.954 { 00:18:54.954 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:54.954 "subtype": "NVMe", 00:18:54.954 "listen_addresses": [ 00:18:54.954 { 00:18:54.954 "trtype": "VFIOUSER", 00:18:54.954 "adrfam": "IPv4", 00:18:54.954 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:54.954 "trsvcid": "0" 00:18:54.954 } 00:18:54.954 ], 00:18:54.954 "allow_any_host": true, 00:18:54.954 "hosts": [], 00:18:54.954 "serial_number": "SPDK1", 00:18:54.954 "model_number": "SPDK bdev Controller", 00:18:54.954 "max_namespaces": 32, 00:18:54.954 "min_cntlid": 1, 00:18:54.954 "max_cntlid": 65519, 00:18:54.954 "namespaces": [ 00:18:54.954 { 00:18:54.954 "nsid": 1, 00:18:54.954 "bdev_name": "Malloc1", 00:18:54.954 "name": "Malloc1", 00:18:54.954 "nguid": "3B4C522334BE42F087926630CBB5C678", 00:18:54.954 "uuid": "3b4c5223-34be-42f0-8792-6630cbb5c678" 00:18:54.954 }, 00:18:54.954 { 00:18:54.954 "nsid": 2, 00:18:54.954 "bdev_name": "Malloc3", 00:18:54.954 "name": "Malloc3", 00:18:54.954 "nguid": "51B431CBE2D04819BA1BE68EF12C2178", 00:18:54.954 "uuid": "51b431cb-e2d0-4819-ba1b-e68ef12c2178" 00:18:54.954 } 00:18:54.954 ] 00:18:54.954 }, 00:18:54.954 { 00:18:54.954 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:54.954 "subtype": "NVMe", 00:18:54.954 "listen_addresses": [ 00:18:54.954 { 00:18:54.954 "trtype": "VFIOUSER", 00:18:54.954 "adrfam": "IPv4", 00:18:54.954 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:54.954 "trsvcid": "0" 00:18:54.954 } 00:18:54.954 ], 00:18:54.954 "allow_any_host": true, 00:18:54.954 "hosts": [], 00:18:54.954 "serial_number": "SPDK2", 00:18:54.954 "model_number": "SPDK bdev Controller", 00:18:54.954 "max_namespaces": 32, 00:18:54.954 "min_cntlid": 1, 00:18:54.954 "max_cntlid": 65519, 00:18:54.954 "namespaces": [ 
00:18:54.954 { 00:18:54.954 "nsid": 1, 00:18:54.954 "bdev_name": "Malloc2", 00:18:54.954 "name": "Malloc2", 00:18:54.954 "nguid": "CD99579C57BF4F70905ED3C1C202908A", 00:18:54.954 "uuid": "cd99579c-57bf-4f70-905e-d3c1c202908a" 00:18:54.954 }, 00:18:54.954 { 00:18:54.954 "nsid": 2, 00:18:54.954 "bdev_name": "Malloc4", 00:18:54.954 "name": "Malloc4", 00:18:54.954 "nguid": "C1FDB816713E4913A9CD147E5D93CDE7", 00:18:54.954 "uuid": "c1fdb816-713e-4913-a9cd-147e5d93cde7" 00:18:54.954 } 00:18:54.954 ] 00:18:54.954 } 00:18:54.954 ] 00:18:54.954 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 233618 00:18:54.954 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:54.954 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 228023 00:18:54.954 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 228023 ']' 00:18:54.954 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 228023 00:18:54.954 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:18:54.954 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:54.954 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 228023 00:18:54.954 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:54.954 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:54.954 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 228023' 00:18:54.954 killing process with pid 228023 00:18:54.954 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@969 -- # kill 228023 00:18:54.954 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 228023 00:18:55.212 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:55.212 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:55.212 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:55.212 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:55.212 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:55.212 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=233764 00:18:55.212 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 233764' 00:18:55.212 Process pid: 233764 00:18:55.212 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:55.212 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:55.212 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 233764 00:18:55.212 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 233764 ']' 00:18:55.212 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.212 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:55.212 22:41:58 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.212 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:55.212 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:55.212 [2024-10-11 22:41:58.366879] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:55.212 [2024-10-11 22:41:58.367846] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:18:55.212 [2024-10-11 22:41:58.367896] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.212 [2024-10-11 22:41:58.424128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:55.212 [2024-10-11 22:41:58.468515] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.212 [2024-10-11 22:41:58.468601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:55.212 [2024-10-11 22:41:58.468626] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.212 [2024-10-11 22:41:58.468637] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.212 [2024-10-11 22:41:58.468647] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:55.212 [2024-10-11 22:41:58.470157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.212 [2024-10-11 22:41:58.470233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:55.212 [2024-10-11 22:41:58.470297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.212 [2024-10-11 22:41:58.470293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:55.473 [2024-10-11 22:41:58.559654] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:55.473 [2024-10-11 22:41:58.559857] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:55.473 [2024-10-11 22:41:58.560208] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:18:55.473 [2024-10-11 22:41:58.560739] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:55.473 [2024-10-11 22:41:58.560995] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:18:55.473 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:55.473 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:18:55.473 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:56.411 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:56.670 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:56.670 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:56.670 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:56.670 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:56.670 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:57.236 Malloc1 00:18:57.236 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:57.495 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:57.754 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:18:58.012 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:58.012 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:58.012 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:58.271 Malloc2 00:18:58.271 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:58.528 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:58.786 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:59.044 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:59.044 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 233764 00:18:59.044 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 233764 ']' 00:18:59.044 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 233764 00:18:59.044 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:18:59.044 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:59.044 22:42:02 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 233764 00:18:59.044 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:59.044 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:59.044 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 233764' 00:18:59.044 killing process with pid 233764 00:18:59.044 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 233764 00:18:59.044 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 233764 00:18:59.302 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:59.302 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:59.302 00:18:59.302 real 0m53.917s 00:18:59.302 user 3m28.589s 00:18:59.302 sys 0m4.030s 00:18:59.302 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:59.302 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:59.302 ************************************ 00:18:59.302 END TEST nvmf_vfio_user 00:18:59.302 ************************************ 00:18:59.302 22:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:59.302 22:42:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:59.302 22:42:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:59.302 22:42:02 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:18:59.302 ************************************ 00:18:59.302 START TEST nvmf_vfio_user_nvme_compliance 00:18:59.302 ************************************ 00:18:59.302 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:59.562 * Looking for test storage... 00:18:59.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:59.562 22:42:02 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:59.562 22:42:02 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:59.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.562 --rc genhtml_branch_coverage=1 00:18:59.562 --rc genhtml_function_coverage=1 00:18:59.562 --rc genhtml_legend=1 00:18:59.562 --rc geninfo_all_blocks=1 00:18:59.562 --rc geninfo_unexecuted_blocks=1 00:18:59.562 00:18:59.562 ' 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:59.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.562 --rc genhtml_branch_coverage=1 00:18:59.562 --rc genhtml_function_coverage=1 00:18:59.562 --rc genhtml_legend=1 00:18:59.562 --rc geninfo_all_blocks=1 00:18:59.562 --rc geninfo_unexecuted_blocks=1 00:18:59.562 00:18:59.562 ' 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:59.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.562 --rc genhtml_branch_coverage=1 00:18:59.562 --rc genhtml_function_coverage=1 00:18:59.562 --rc 
genhtml_legend=1 00:18:59.562 --rc geninfo_all_blocks=1 00:18:59.562 --rc geninfo_unexecuted_blocks=1 00:18:59.562 00:18:59.562 ' 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:59.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.562 --rc genhtml_branch_coverage=1 00:18:59.562 --rc genhtml_function_coverage=1 00:18:59.562 --rc genhtml_legend=1 00:18:59.562 --rc geninfo_all_blocks=1 00:18:59.562 --rc geninfo_unexecuted_blocks=1 00:18:59.562 00:18:59.562 ' 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.562 22:42:02 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:59.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:59.562 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:59.562 22:42:02 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:59.563 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:59.563 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:59.563 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:59.563 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:59.563 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:59.563 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:59.563 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=234477 00:18:59.563 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:59.563 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 234477' 00:18:59.563 Process pid: 234477 00:18:59.563 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:59.563 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 234477 00:18:59.563 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 234477 ']' 00:18:59.563 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:18:59.563 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:59.563 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.563 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:59.563 22:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:59.563 [2024-10-11 22:42:02.784396] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:18:59.563 [2024-10-11 22:42:02.784498] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.822 [2024-10-11 22:42:02.847360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:59.822 [2024-10-11 22:42:02.894721] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.822 [2024-10-11 22:42:02.894768] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:59.822 [2024-10-11 22:42:02.894793] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:59.822 [2024-10-11 22:42:02.894805] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:59.822 [2024-10-11 22:42:02.894838] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:59.822 [2024-10-11 22:42:02.896169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.822 [2024-10-11 22:42:02.896284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.822 [2024-10-11 22:42:02.896289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.822 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:59.822 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:18:59.822 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:01.196 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:01.196 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:01.196 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:01.196 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.196 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:01.196 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.196 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:01.196 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:01.196 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.196 22:42:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:01.196 malloc0 00:19:01.196 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.196 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:01.196 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.196 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:01.196 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.196 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:01.196 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.196 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:01.196 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.196 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:01.196 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.196 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:01.196 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:01.196 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:01.196 00:19:01.196 00:19:01.196 CUnit - A unit testing framework for C - Version 2.1-3 00:19:01.196 http://cunit.sourceforge.net/ 00:19:01.196 00:19:01.196 00:19:01.196 Suite: nvme_compliance 00:19:01.196 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-11 22:42:04.273109] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:01.196 [2024-10-11 22:42:04.274623] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:01.196 [2024-10-11 22:42:04.274651] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:01.196 [2024-10-11 22:42:04.274664] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:01.196 [2024-10-11 22:42:04.276124] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:01.196 passed 00:19:01.196 Test: admin_identify_ctrlr_verify_fused ...[2024-10-11 22:42:04.359730] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:01.196 [2024-10-11 22:42:04.362745] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:01.196 passed 00:19:01.196 Test: admin_identify_ns ...[2024-10-11 22:42:04.452298] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:01.454 [2024-10-11 22:42:04.511566] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:01.454 [2024-10-11 22:42:04.519571] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:01.454 [2024-10-11 22:42:04.540693] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:19:01.454 passed 00:19:01.454 Test: admin_get_features_mandatory_features ...[2024-10-11 22:42:04.624787] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:01.454 [2024-10-11 22:42:04.627807] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:01.454 passed 00:19:01.454 Test: admin_get_features_optional_features ...[2024-10-11 22:42:04.715371] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:01.454 [2024-10-11 22:42:04.718392] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:01.712 passed 00:19:01.712 Test: admin_set_features_number_of_queues ...[2024-10-11 22:42:04.801846] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:01.712 [2024-10-11 22:42:04.925662] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:01.712 passed 00:19:01.970 Test: admin_get_log_page_mandatory_logs ...[2024-10-11 22:42:05.009420] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:01.970 [2024-10-11 22:42:05.012455] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:01.970 passed 00:19:01.970 Test: admin_get_log_page_with_lpo ...[2024-10-11 22:42:05.094956] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:01.970 [2024-10-11 22:42:05.166569] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:01.970 [2024-10-11 22:42:05.176652] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:01.970 passed 00:19:02.228 Test: fabric_property_get ...[2024-10-11 22:42:05.264218] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.228 [2024-10-11 22:42:05.265490] vfio_user.c:5600:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:02.228 [2024-10-11 22:42:05.267240] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.228 passed 00:19:02.228 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-11 22:42:05.352807] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.228 [2024-10-11 22:42:05.354122] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:02.228 [2024-10-11 22:42:05.355845] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.228 passed 00:19:02.228 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-11 22:42:05.438315] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.487 [2024-10-11 22:42:05.521566] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:02.487 [2024-10-11 22:42:05.537560] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:02.487 [2024-10-11 22:42:05.542688] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.487 passed 00:19:02.487 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-11 22:42:05.629196] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.487 [2024-10-11 22:42:05.630502] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:02.487 [2024-10-11 22:42:05.632227] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.487 passed 00:19:02.487 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-11 22:42:05.716679] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.745 [2024-10-11 22:42:05.792561] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:02.745 [2024-10-11 
22:42:05.816559] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:02.745 [2024-10-11 22:42:05.821668] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.745 passed 00:19:02.745 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-11 22:42:05.905189] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.745 [2024-10-11 22:42:05.906488] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:02.745 [2024-10-11 22:42:05.906531] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:02.745 [2024-10-11 22:42:05.909284] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.745 passed 00:19:02.745 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-11 22:42:05.995403] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:03.002 [2024-10-11 22:42:06.091558] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:03.002 [2024-10-11 22:42:06.099578] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:03.002 [2024-10-11 22:42:06.107564] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:03.002 [2024-10-11 22:42:06.115563] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:03.002 [2024-10-11 22:42:06.143652] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:03.002 passed 00:19:03.002 Test: admin_create_io_sq_verify_pc ...[2024-10-11 22:42:06.228948] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:03.002 [2024-10-11 22:42:06.245575] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:03.002 [2024-10-11 22:42:06.263274] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:03.260 passed 00:19:03.260 Test: admin_create_io_qp_max_qps ...[2024-10-11 22:42:06.345840] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:04.193 [2024-10-11 22:42:07.452568] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:19:04.759 [2024-10-11 22:42:07.844612] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:04.759 passed 00:19:04.759 Test: admin_create_io_sq_shared_cq ...[2024-10-11 22:42:07.930359] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:05.017 [2024-10-11 22:42:08.068564] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:05.017 [2024-10-11 22:42:08.105658] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:05.017 passed 00:19:05.017 00:19:05.017 Run Summary: Type Total Ran Passed Failed Inactive 00:19:05.017 suites 1 1 n/a 0 0 00:19:05.017 tests 18 18 18 0 0 00:19:05.017 asserts 360 360 360 0 n/a 00:19:05.017 00:19:05.017 Elapsed time = 1.595 seconds 00:19:05.017 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 234477 00:19:05.017 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 234477 ']' 00:19:05.017 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 234477 00:19:05.017 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:19:05.017 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:05.017 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 234477 00:19:05.017 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:05.017 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:05.017 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 234477' 00:19:05.017 killing process with pid 234477 00:19:05.017 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 234477 00:19:05.017 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 234477 00:19:05.275 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:05.275 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:05.275 00:19:05.275 real 0m5.851s 00:19:05.275 user 0m16.404s 00:19:05.275 sys 0m0.584s 00:19:05.275 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:05.275 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:05.275 ************************************ 00:19:05.275 END TEST nvmf_vfio_user_nvme_compliance 00:19:05.275 ************************************ 00:19:05.275 22:42:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:05.275 22:42:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:05.275 22:42:08 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:19:05.275 22:42:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:05.275 ************************************ 00:19:05.275 START TEST nvmf_vfio_user_fuzz 00:19:05.275 ************************************ 00:19:05.275 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:05.275 * Looking for test storage... 00:19:05.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:05.275 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:05.275 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:19:05.275 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:05.535 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:05.535 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:05.535 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:05.535 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:05.535 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:05.535 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:05.535 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:05.535 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:05.535 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:19:05.535 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:05.535 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:05.536 22:42:08 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:05.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.536 --rc genhtml_branch_coverage=1 00:19:05.536 --rc genhtml_function_coverage=1 00:19:05.536 --rc genhtml_legend=1 00:19:05.536 --rc geninfo_all_blocks=1 00:19:05.536 --rc geninfo_unexecuted_blocks=1 00:19:05.536 00:19:05.536 ' 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:05.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.536 --rc genhtml_branch_coverage=1 00:19:05.536 --rc genhtml_function_coverage=1 00:19:05.536 --rc genhtml_legend=1 00:19:05.536 --rc geninfo_all_blocks=1 00:19:05.536 --rc geninfo_unexecuted_blocks=1 00:19:05.536 00:19:05.536 ' 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:05.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.536 --rc genhtml_branch_coverage=1 00:19:05.536 --rc genhtml_function_coverage=1 00:19:05.536 --rc genhtml_legend=1 00:19:05.536 --rc geninfo_all_blocks=1 00:19:05.536 --rc geninfo_unexecuted_blocks=1 00:19:05.536 00:19:05.536 ' 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:05.536 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:19:05.536 --rc genhtml_branch_coverage=1 00:19:05.536 --rc genhtml_function_coverage=1 00:19:05.536 --rc genhtml_legend=1 00:19:05.536 --rc geninfo_all_blocks=1 00:19:05.536 --rc geninfo_unexecuted_blocks=1 00:19:05.536 00:19:05.536 ' 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.536 22:42:08 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:05.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=235642 00:19:05.536 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:05.537 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 235642' 00:19:05.537 Process pid: 235642 00:19:05.537 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:05.537 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 235642 00:19:05.537 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 235642 ']' 00:19:05.537 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.537 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:05.537 22:42:08 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.537 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:05.537 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:05.795 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:05.796 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:19:05.796 22:42:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:06.731 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:06.731 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.731 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:06.731 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.731 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:06.731 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:06.731 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.731 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:06.731 malloc0 00:19:06.731 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.731 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:06.731 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.731 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:06.731 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.731 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:06.731 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.731 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:06.731 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.731 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:06.731 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.731 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:06.731 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.731 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:19:06.731 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:38.802 Fuzzing completed. Shutting down the fuzz application 00:19:38.802 00:19:38.802 Dumping successful admin opcodes: 00:19:38.802 8, 9, 10, 24, 00:19:38.802 Dumping successful io opcodes: 00:19:38.802 0, 00:19:38.802 NS: 0x20000081ef00 I/O qp, Total commands completed: 661429, total successful commands: 2581, random_seed: 1566412160 00:19:38.802 NS: 0x20000081ef00 admin qp, Total commands completed: 113958, total successful commands: 931, random_seed: 1479833984 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 235642 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 235642 ']' 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 235642 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 235642 00:19:38.802 22:42:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 235642' 00:19:38.802 killing process with pid 235642 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 235642 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 235642 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:38.802 00:19:38.802 real 0m32.171s 00:19:38.802 user 0m29.655s 00:19:38.802 sys 0m29.617s 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:38.802 ************************************ 00:19:38.802 END TEST nvmf_vfio_user_fuzz 00:19:38.802 ************************************ 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:38.802 ************************************ 00:19:38.802 START TEST nvmf_auth_target 00:19:38.802 ************************************ 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:38.802 * Looking for test storage... 00:19:38.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:38.802 22:42:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:38.802 22:42:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:38.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.802 --rc genhtml_branch_coverage=1 00:19:38.802 --rc genhtml_function_coverage=1 00:19:38.802 --rc genhtml_legend=1 00:19:38.802 --rc geninfo_all_blocks=1 00:19:38.802 --rc geninfo_unexecuted_blocks=1 00:19:38.802 00:19:38.802 ' 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:38.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.802 --rc genhtml_branch_coverage=1 00:19:38.802 --rc genhtml_function_coverage=1 00:19:38.802 --rc genhtml_legend=1 00:19:38.802 --rc geninfo_all_blocks=1 00:19:38.802 --rc geninfo_unexecuted_blocks=1 00:19:38.802 00:19:38.802 ' 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:38.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.802 --rc genhtml_branch_coverage=1 00:19:38.802 --rc genhtml_function_coverage=1 00:19:38.802 --rc genhtml_legend=1 00:19:38.802 --rc geninfo_all_blocks=1 00:19:38.802 --rc geninfo_unexecuted_blocks=1 00:19:38.802 00:19:38.802 ' 00:19:38.802 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:38.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.802 --rc genhtml_branch_coverage=1 00:19:38.802 --rc genhtml_function_coverage=1 00:19:38.802 --rc genhtml_legend=1 00:19:38.802 
--rc geninfo_all_blocks=1 00:19:38.802 --rc geninfo_unexecuted_blocks=1 00:19:38.802 00:19:38.802 ' 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:38.803 
22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:38.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:38.803 22:42:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:38.803 22:42:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:38.803 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.740 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:39.740 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:39.740 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:39.740 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:39.740 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:39.740 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:39.740 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:39.740 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:39.740 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:39.740 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:39.740 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:39.740 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:39.740 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:39.740 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:39.740 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:39.740 22:42:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:39.740 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:39.740 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:39.740 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:39.740 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:39.740 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:39.740 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:39.740 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:39.740 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:39.740 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:39.740 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:39.740 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:39.740 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:39.740 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:39.740 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:39.740 22:42:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:39.740 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:39.740 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:39.740 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:39.740 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:39.740 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:39.740 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:39.740 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:39.740 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.740 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.740 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:39.740 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:39.740 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:39.740 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:39.740 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:39.740 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:39.741 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.741 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.741 
22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:39.741 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:39.741 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:39.741 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:39.741 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:39.741 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.741 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:39.741 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.741 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:40.000 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:40.000 
22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:40.000 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:40.000 22:42:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:40.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:40.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:19:40.000 00:19:40.000 --- 10.0.0.2 ping statistics --- 00:19:40.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.000 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:40.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:40.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:19:40.000 00:19:40.000 --- 10.0.0.1 ping statistics --- 00:19:40.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.000 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=241157 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 241157 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:40.000 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 241157 ']' 00:19:40.001 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.001 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:40.001 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:40.001 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:40.001 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.259 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:40.259 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:40.259 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:40.259 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:40.259 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.259 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.259 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=241297 00:19:40.259 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:40.259 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:40.259 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:40.259 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:40.259 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.259 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:40.259 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@752 -- # digest=null 00:19:40.259 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:19:40.259 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:40.259 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=b33e5d6a512553019ee1323f4999229730a2f2e0290cf931 00:19:40.259 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:19:40.259 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.WDt 00:19:40.259 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key b33e5d6a512553019ee1323f4999229730a2f2e0290cf931 0 00:19:40.259 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 b33e5d6a512553019ee1323f4999229730a2f2e0290cf931 0 00:19:40.259 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:40.259 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:40.259 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=b33e5d6a512553019ee1323f4999229730a2f2e0290cf931 00:19:40.259 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:19:40.259 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:40.259 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.WDt 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.WDt 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.WDt 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=083597d9b327c6f2c55938fed9f2944971c2f5fb9743691efbcce17f950e4f04 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.BV1 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 083597d9b327c6f2c55938fed9f2944971c2f5fb9743691efbcce17f950e4f04 3 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 083597d9b327c6f2c55938fed9f2944971c2f5fb9743691efbcce17f950e4f04 3 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=083597d9b327c6f2c55938fed9f2944971c2f5fb9743691efbcce17f950e4f04 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@730 -- # digest=3 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.BV1 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.BV1 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.BV1 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=76404562b3fd1d69b299d7dc6b3f8f22 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.PI1 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 76404562b3fd1d69b299d7dc6b3f8f22 1 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 
76404562b3fd1d69b299d7dc6b3f8f22 1 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=76404562b3fd1d69b299d7dc6b3f8f22 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.PI1 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.PI1 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.PI1 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=26716b0ae9eb4015d5ed66f0f3599ea64442c1fb233dbde2 00:19:40.519 22:42:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.2Dv 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 26716b0ae9eb4015d5ed66f0f3599ea64442c1fb233dbde2 2 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 26716b0ae9eb4015d5ed66f0f3599ea64442c1fb233dbde2 2 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=26716b0ae9eb4015d5ed66f0f3599ea64442c1fb233dbde2 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.2Dv 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.2Dv 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.2Dv 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A 
digests 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=47b86868f6d2fa50446de9508e9a60da5a6c44d07923cab5 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.y1q 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 47b86868f6d2fa50446de9508e9a60da5a6c44d07923cab5 2 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 47b86868f6d2fa50446de9508e9a60da5a6c44d07923cab5 2 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=47b86868f6d2fa50446de9508e9a60da5a6c44d07923cab5 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.y1q 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.y1q 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.y1q 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=bb3b4dc2b364a23c345d98233894d3d8 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.JEc 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key bb3b4dc2b364a23c345d98233894d3d8 1 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 bb3b4dc2b364a23c345d98233894d3d8 1 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=bb3b4dc2b364a23c345d98233894d3d8 00:19:40.519 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 
00:19:40.520 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:40.520 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.JEc 00:19:40.520 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.JEc 00:19:40.520 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.JEc 00:19:40.520 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:40.520 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:40.520 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.520 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:40.520 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:19:40.520 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:19:40.520 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:40.520 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=c3bc0af2dfa5443b82370fe0379e43fd2a1cbec2eb5471466936b5ace1d64fc7 00:19:40.520 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:19:40.778 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.7HN 00:19:40.778 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key c3bc0af2dfa5443b82370fe0379e43fd2a1cbec2eb5471466936b5ace1d64fc7 3 00:19:40.778 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # 
format_key DHHC-1 c3bc0af2dfa5443b82370fe0379e43fd2a1cbec2eb5471466936b5ace1d64fc7 3 00:19:40.778 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:40.778 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:40.778 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=c3bc0af2dfa5443b82370fe0379e43fd2a1cbec2eb5471466936b5ace1d64fc7 00:19:40.778 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:19:40.778 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:40.778 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.7HN 00:19:40.778 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.7HN 00:19:40.778 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.7HN 00:19:40.778 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:40.778 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 241157 00:19:40.778 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 241157 ']' 00:19:40.778 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.778 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:40.778 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
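The `gen_dhchap_key` / `format_key` steps above wrap each random secret in the NVMe DH-HMAC-CHAP on-disk representation: the ASCII secret bytes plus a little-endian CRC32 of those bytes are base64-encoded and prefixed with `DHHC-1:<hash-id>:` (hash ids 0/1/2/3 for null/sha256/sha384/sha512, matching the `digests` map in the log). A minimal Python sketch of that encoding — the function name is illustrative, not SPDK's actual helper, but the format mirrors the `python -` snippet `format_key` runs:

```python
import base64
import secrets
import zlib

def format_dhchap_key(secret: bytes, digest: int) -> str:
    # Append a little-endian CRC32 of the secret, base64 the result,
    # and wrap it as DHHC-1:<hash-id>:<base64>:
    crc = zlib.crc32(secret).to_bytes(4, "little")
    b64 = base64.b64encode(secret + crc).decode("ascii")
    return f"DHHC-1:{digest:02x}:{b64}:"

# 48 hex characters (24 random bytes), as `xxd -p -c0 -l 24 /dev/urandom`
# produces for the null-digest key in the log.
hex_key = secrets.token_hex(24).encode("ascii")
print(format_dhchap_key(hex_key, 0))
```

Note that the secret is the hex *string* from `xxd -p`, not the decoded bytes — the CRC and base64 operate on the 48 ASCII characters.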
00:19:40.778 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:40.778 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.037 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:41.037 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:41.037 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 241297 /var/tmp/host.sock 00:19:41.037 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 241297 ']' 00:19:41.037 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:19:41.037 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:41.037 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:41.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
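The `waitforlisten` calls above block until the target (`/var/tmp/spdk.sock`) and host (`/var/tmp/host.sock`) applications are up and accepting RPC connections on their UNIX-domain sockets. A minimal Python sketch of that polling pattern, assuming a simple timeout loop (the function name and interval are illustrative, not the autotest helper's actual implementation):

```python
import os
import socket
import time

def wait_for_listen(path: str, timeout: float = 10.0) -> bool:
    # Poll until a UNIX-domain socket at `path` exists and accepts a
    # connection, or give up after `timeout` seconds.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)
                return True
            except OSError:
                pass  # socket file exists but nothing is listening yet
            finally:
                s.close()
        time.sleep(0.1)
    return False
```

Once the socket accepts connections, the harness can safely issue `rpc.py -s /var/tmp/host.sock ...` commands, as the subsequent `keyring_file_add_key` calls do.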
00:19:41.037 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:41.037 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.295 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:41.295 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:41.295 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:41.295 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.295 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.295 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.295 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:41.295 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.WDt 00:19:41.295 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.295 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.295 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.295 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.WDt 00:19:41.295 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.WDt 00:19:41.553 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.BV1 ]] 00:19:41.553 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.BV1 00:19:41.553 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.553 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.553 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.553 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.BV1 00:19:41.553 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.BV1 00:19:41.811 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:41.811 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.PI1 00:19:41.811 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.811 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.811 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.811 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.PI1 00:19:41.811 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.PI1 00:19:42.069 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.2Dv ]] 00:19:42.069 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2Dv 00:19:42.069 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.069 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.069 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.069 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2Dv 00:19:42.069 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2Dv 00:19:42.328 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:42.328 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.y1q 00:19:42.328 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.328 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.328 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.328 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.y1q 00:19:42.328 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.y1q 00:19:42.586 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.JEc ]] 00:19:42.586 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JEc 00:19:42.586 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.586 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.586 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.586 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JEc 00:19:42.586 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JEc 00:19:42.843 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:42.843 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.7HN 00:19:42.843 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.843 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.843 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.843 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.7HN 00:19:42.843 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.7HN 00:19:43.100 22:42:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:43.100 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:43.100 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:43.100 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.100 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:43.100 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:43.358 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:43.358 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.358 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:43.358 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:43.358 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:43.358 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.358 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.358 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.358 22:42:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.616 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.616 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.616 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.616 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.873 00:19:43.873 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.873 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.873 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.131 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.131 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.131 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.131 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:44.131 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.131 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.131 { 00:19:44.131 "cntlid": 1, 00:19:44.131 "qid": 0, 00:19:44.131 "state": "enabled", 00:19:44.131 "thread": "nvmf_tgt_poll_group_000", 00:19:44.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:44.131 "listen_address": { 00:19:44.131 "trtype": "TCP", 00:19:44.131 "adrfam": "IPv4", 00:19:44.131 "traddr": "10.0.0.2", 00:19:44.131 "trsvcid": "4420" 00:19:44.131 }, 00:19:44.131 "peer_address": { 00:19:44.131 "trtype": "TCP", 00:19:44.131 "adrfam": "IPv4", 00:19:44.131 "traddr": "10.0.0.1", 00:19:44.131 "trsvcid": "46260" 00:19:44.131 }, 00:19:44.131 "auth": { 00:19:44.131 "state": "completed", 00:19:44.131 "digest": "sha256", 00:19:44.131 "dhgroup": "null" 00:19:44.131 } 00:19:44.131 } 00:19:44.131 ]' 00:19:44.131 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.131 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.131 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.131 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:44.131 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.132 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.132 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.132 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.389 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:19:44.389 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:19:49.655 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.655 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.655 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.655 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.655 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.655 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.655 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:19:49.655 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:49.655 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:49.655 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.655 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:49.655 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:49.655 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:49.655 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.655 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.655 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.655 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.655 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.655 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.655 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.655 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.655 00:19:49.655 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.655 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.655 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.913 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.913 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.913 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.913 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.913 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.913 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.913 { 00:19:49.913 "cntlid": 3, 00:19:49.913 "qid": 0, 00:19:49.913 "state": "enabled", 00:19:49.913 "thread": "nvmf_tgt_poll_group_000", 00:19:49.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:49.913 "listen_address": { 00:19:49.913 "trtype": "TCP", 00:19:49.913 "adrfam": "IPv4", 00:19:49.913 
"traddr": "10.0.0.2", 00:19:49.913 "trsvcid": "4420" 00:19:49.913 }, 00:19:49.913 "peer_address": { 00:19:49.913 "trtype": "TCP", 00:19:49.913 "adrfam": "IPv4", 00:19:49.913 "traddr": "10.0.0.1", 00:19:49.913 "trsvcid": "34546" 00:19:49.913 }, 00:19:49.913 "auth": { 00:19:49.913 "state": "completed", 00:19:49.913 "digest": "sha256", 00:19:49.913 "dhgroup": "null" 00:19:49.913 } 00:19:49.913 } 00:19:49.913 ]' 00:19:49.913 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.913 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.913 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.171 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:50.171 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.171 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.171 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.171 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.429 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:19:50.429 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:19:51.362 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.362 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:51.362 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.362 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.362 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.362 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.362 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:51.362 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:51.620 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:51.620 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.620 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:51.620 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:19:51.621 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:51.621 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.621 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.621 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.621 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.621 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.621 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.621 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.621 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.879 00:19:51.879 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.879 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.879 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.138 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.138 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.138 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.138 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.138 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.138 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.138 { 00:19:52.138 "cntlid": 5, 00:19:52.138 "qid": 0, 00:19:52.138 "state": "enabled", 00:19:52.138 "thread": "nvmf_tgt_poll_group_000", 00:19:52.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:52.138 "listen_address": { 00:19:52.138 "trtype": "TCP", 00:19:52.138 "adrfam": "IPv4", 00:19:52.138 "traddr": "10.0.0.2", 00:19:52.138 "trsvcid": "4420" 00:19:52.138 }, 00:19:52.138 "peer_address": { 00:19:52.138 "trtype": "TCP", 00:19:52.138 "adrfam": "IPv4", 00:19:52.138 "traddr": "10.0.0.1", 00:19:52.138 "trsvcid": "34562" 00:19:52.138 }, 00:19:52.138 "auth": { 00:19:52.138 "state": "completed", 00:19:52.138 "digest": "sha256", 00:19:52.138 "dhgroup": "null" 00:19:52.138 } 00:19:52.138 } 00:19:52.138 ]' 00:19:52.138 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.138 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.138 22:42:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.138 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:52.138 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.138 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.138 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.138 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.704 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:19:52.704 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:19:53.269 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.527 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.527 
22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.527 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.527 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.527 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.527 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:53.527 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:53.785 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:53.785 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.785 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:53.785 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:53.785 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:53.785 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.785 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:53.785 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.785 22:42:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.785 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.785 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:53.785 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:53.785 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:54.043 00:19:54.043 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.043 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.043 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.301 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.301 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.301 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.301 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.301 22:42:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.301 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.301 { 00:19:54.301 "cntlid": 7, 00:19:54.301 "qid": 0, 00:19:54.301 "state": "enabled", 00:19:54.301 "thread": "nvmf_tgt_poll_group_000", 00:19:54.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:54.301 "listen_address": { 00:19:54.301 "trtype": "TCP", 00:19:54.301 "adrfam": "IPv4", 00:19:54.301 "traddr": "10.0.0.2", 00:19:54.301 "trsvcid": "4420" 00:19:54.301 }, 00:19:54.301 "peer_address": { 00:19:54.301 "trtype": "TCP", 00:19:54.301 "adrfam": "IPv4", 00:19:54.301 "traddr": "10.0.0.1", 00:19:54.301 "trsvcid": "34578" 00:19:54.301 }, 00:19:54.301 "auth": { 00:19:54.301 "state": "completed", 00:19:54.301 "digest": "sha256", 00:19:54.301 "dhgroup": "null" 00:19:54.301 } 00:19:54.301 } 00:19:54.301 ]' 00:19:54.301 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.301 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.301 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.301 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:54.302 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.560 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.560 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.560 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:54.818 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:19:54.818 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:19:55.753 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.753 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:55.753 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.753 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.753 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.753 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.753 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.753 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:55.753 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:55.753 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:55.753 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.753 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:55.753 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:55.753 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:55.753 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.753 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.753 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.753 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.753 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.753 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.753 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.753 22:42:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.320 00:19:56.320 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.320 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.320 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.578 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.578 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.578 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.578 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.578 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.578 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.578 { 00:19:56.578 "cntlid": 9, 00:19:56.578 "qid": 0, 00:19:56.578 "state": "enabled", 00:19:56.578 "thread": "nvmf_tgt_poll_group_000", 00:19:56.578 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:56.578 "listen_address": { 00:19:56.578 "trtype": "TCP", 00:19:56.578 "adrfam": "IPv4", 00:19:56.578 "traddr": "10.0.0.2", 00:19:56.578 "trsvcid": "4420" 00:19:56.578 }, 00:19:56.578 "peer_address": { 
00:19:56.578 "trtype": "TCP", 00:19:56.578 "adrfam": "IPv4", 00:19:56.578 "traddr": "10.0.0.1", 00:19:56.578 "trsvcid": "34608" 00:19:56.578 }, 00:19:56.578 "auth": { 00:19:56.578 "state": "completed", 00:19:56.578 "digest": "sha256", 00:19:56.578 "dhgroup": "ffdhe2048" 00:19:56.578 } 00:19:56.578 } 00:19:56.578 ]' 00:19:56.578 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.578 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.578 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.578 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:56.578 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.578 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.578 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.578 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.836 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:19:56.836 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:19:57.770 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.770 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.770 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.770 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.770 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.770 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.770 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:57.770 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:58.028 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:58.029 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.029 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:58.029 22:43:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:58.029 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:58.029 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.029 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.029 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.029 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.029 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.029 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.029 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.029 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.286 00:19:58.545 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.545 22:43:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.545 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.803 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.803 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.803 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.803 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.803 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.803 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.803 { 00:19:58.803 "cntlid": 11, 00:19:58.803 "qid": 0, 00:19:58.803 "state": "enabled", 00:19:58.803 "thread": "nvmf_tgt_poll_group_000", 00:19:58.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:58.803 "listen_address": { 00:19:58.803 "trtype": "TCP", 00:19:58.803 "adrfam": "IPv4", 00:19:58.803 "traddr": "10.0.0.2", 00:19:58.803 "trsvcid": "4420" 00:19:58.803 }, 00:19:58.803 "peer_address": { 00:19:58.803 "trtype": "TCP", 00:19:58.803 "adrfam": "IPv4", 00:19:58.803 "traddr": "10.0.0.1", 00:19:58.803 "trsvcid": "34624" 00:19:58.803 }, 00:19:58.803 "auth": { 00:19:58.803 "state": "completed", 00:19:58.803 "digest": "sha256", 00:19:58.803 "dhgroup": "ffdhe2048" 00:19:58.803 } 00:19:58.803 } 00:19:58.803 ]' 00:19:58.803 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.803 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:19:58.803 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.803 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:58.803 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.803 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.803 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.803 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.061 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:19:59.061 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:19:59.995 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.995 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.995 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.995 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.995 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.995 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.995 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:59.995 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:00.254 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:20:00.254 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.254 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:00.254 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:00.254 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:00.254 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.254 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.254 22:43:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.254 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.254 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.254 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.254 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.254 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.512 00:20:00.771 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.771 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.771 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.029 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.029 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.029 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.029 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.029 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.029 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.029 { 00:20:01.029 "cntlid": 13, 00:20:01.029 "qid": 0, 00:20:01.029 "state": "enabled", 00:20:01.029 "thread": "nvmf_tgt_poll_group_000", 00:20:01.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:01.029 "listen_address": { 00:20:01.029 "trtype": "TCP", 00:20:01.029 "adrfam": "IPv4", 00:20:01.029 "traddr": "10.0.0.2", 00:20:01.029 "trsvcid": "4420" 00:20:01.029 }, 00:20:01.029 "peer_address": { 00:20:01.029 "trtype": "TCP", 00:20:01.029 "adrfam": "IPv4", 00:20:01.029 "traddr": "10.0.0.1", 00:20:01.029 "trsvcid": "40874" 00:20:01.029 }, 00:20:01.029 "auth": { 00:20:01.029 "state": "completed", 00:20:01.029 "digest": "sha256", 00:20:01.029 "dhgroup": "ffdhe2048" 00:20:01.029 } 00:20:01.029 } 00:20:01.029 ]' 00:20:01.029 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.029 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:01.029 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.029 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:01.029 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.029 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.029 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:01.029 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.287 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:20:01.287 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:20:02.221 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.221 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.221 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.222 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.222 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.222 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.222 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:02.222 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:02.480 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:20:02.480 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.480 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:02.480 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:02.480 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:02.480 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.480 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:02.480 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.480 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.480 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.480 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:02.480 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:02.480 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:02.738 00:20:02.738 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.738 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.738 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.995 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.996 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.996 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.996 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.996 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.996 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.996 { 00:20:02.996 "cntlid": 15, 00:20:02.996 "qid": 0, 00:20:02.996 "state": "enabled", 00:20:02.996 "thread": "nvmf_tgt_poll_group_000", 00:20:02.996 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:02.996 "listen_address": { 00:20:02.996 "trtype": "TCP", 00:20:02.996 "adrfam": "IPv4", 00:20:02.996 "traddr": "10.0.0.2", 00:20:02.996 "trsvcid": 
"4420" 00:20:02.996 }, 00:20:02.996 "peer_address": { 00:20:02.996 "trtype": "TCP", 00:20:02.996 "adrfam": "IPv4", 00:20:02.996 "traddr": "10.0.0.1", 00:20:02.996 "trsvcid": "40902" 00:20:02.996 }, 00:20:02.996 "auth": { 00:20:02.996 "state": "completed", 00:20:02.996 "digest": "sha256", 00:20:02.996 "dhgroup": "ffdhe2048" 00:20:02.996 } 00:20:02.996 } 00:20:02.996 ]' 00:20:02.996 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.254 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.254 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.254 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:03.254 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.254 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.254 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.254 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.512 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:20:03.512 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:20:04.445 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.445 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.445 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.445 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.445 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.445 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:04.445 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.445 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:04.445 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:04.703 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:04.703 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.703 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:04.703 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:04.703 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:04.703 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.703 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.703 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.703 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.703 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.703 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.703 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.703 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.960 00:20:04.960 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.960 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.960 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.218 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.218 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.218 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.218 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.218 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.218 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.218 { 00:20:05.218 "cntlid": 17, 00:20:05.218 "qid": 0, 00:20:05.218 "state": "enabled", 00:20:05.218 "thread": "nvmf_tgt_poll_group_000", 00:20:05.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:05.218 "listen_address": { 00:20:05.218 "trtype": "TCP", 00:20:05.218 "adrfam": "IPv4", 00:20:05.218 "traddr": "10.0.0.2", 00:20:05.218 "trsvcid": "4420" 00:20:05.218 }, 00:20:05.218 "peer_address": { 00:20:05.218 "trtype": "TCP", 00:20:05.218 "adrfam": "IPv4", 00:20:05.218 "traddr": "10.0.0.1", 00:20:05.218 "trsvcid": "40932" 00:20:05.218 }, 00:20:05.218 "auth": { 00:20:05.218 "state": "completed", 00:20:05.218 "digest": "sha256", 00:20:05.218 "dhgroup": "ffdhe3072" 00:20:05.218 } 00:20:05.218 } 00:20:05.218 ]' 00:20:05.219 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.219 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:05.219 22:43:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.477 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:05.477 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.477 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.477 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.477 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.734 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:20:05.734 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:20:06.669 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.669 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.669 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.669 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.669 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.669 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.669 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:06.669 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:06.927 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:06.927 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.928 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:06.928 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:06.928 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:06.928 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.928 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.928 22:43:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.928 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.928 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.928 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.928 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.928 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.185 00:20:07.185 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.185 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.185 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.443 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.443 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.443 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.443 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.443 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.443 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.443 { 00:20:07.443 "cntlid": 19, 00:20:07.443 "qid": 0, 00:20:07.443 "state": "enabled", 00:20:07.443 "thread": "nvmf_tgt_poll_group_000", 00:20:07.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:07.443 "listen_address": { 00:20:07.443 "trtype": "TCP", 00:20:07.443 "adrfam": "IPv4", 00:20:07.443 "traddr": "10.0.0.2", 00:20:07.443 "trsvcid": "4420" 00:20:07.443 }, 00:20:07.443 "peer_address": { 00:20:07.443 "trtype": "TCP", 00:20:07.443 "adrfam": "IPv4", 00:20:07.443 "traddr": "10.0.0.1", 00:20:07.443 "trsvcid": "40962" 00:20:07.443 }, 00:20:07.443 "auth": { 00:20:07.443 "state": "completed", 00:20:07.443 "digest": "sha256", 00:20:07.443 "dhgroup": "ffdhe3072" 00:20:07.443 } 00:20:07.443 } 00:20:07.443 ]' 00:20:07.443 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.443 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:07.443 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.700 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:07.700 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.700 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.700 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:07.700 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.958 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:20:07.958 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:20:08.891 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.891 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:08.891 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.891 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.891 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.891 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.891 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:08.891 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:09.149 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:09.149 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.149 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:09.149 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:09.149 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:09.149 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.149 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.149 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.149 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.150 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.150 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.150 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.150 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.407 00:20:09.407 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.407 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.407 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.665 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.665 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.665 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.665 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.665 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.665 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.665 { 00:20:09.665 "cntlid": 21, 00:20:09.665 "qid": 0, 00:20:09.665 "state": "enabled", 00:20:09.665 "thread": "nvmf_tgt_poll_group_000", 00:20:09.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:09.665 "listen_address": { 
00:20:09.665 "trtype": "TCP", 00:20:09.665 "adrfam": "IPv4", 00:20:09.665 "traddr": "10.0.0.2", 00:20:09.665 "trsvcid": "4420" 00:20:09.665 }, 00:20:09.665 "peer_address": { 00:20:09.665 "trtype": "TCP", 00:20:09.665 "adrfam": "IPv4", 00:20:09.665 "traddr": "10.0.0.1", 00:20:09.665 "trsvcid": "43174" 00:20:09.665 }, 00:20:09.665 "auth": { 00:20:09.665 "state": "completed", 00:20:09.665 "digest": "sha256", 00:20:09.665 "dhgroup": "ffdhe3072" 00:20:09.665 } 00:20:09.665 } 00:20:09.665 ]' 00:20:09.666 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.666 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.666 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.666 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:09.666 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.666 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.666 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.666 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.231 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:20:10.231 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:20:11.164 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.164 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.164 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.164 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.164 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.164 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.164 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:11.164 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:11.164 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:11.164 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.164 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:20:11.165 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:11.165 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:11.165 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.165 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:11.165 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.165 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.165 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.165 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:11.165 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:11.165 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:11.730 00:20:11.730 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.730 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:11.730 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.988 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.988 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.988 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.988 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.988 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.988 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.988 { 00:20:11.988 "cntlid": 23, 00:20:11.988 "qid": 0, 00:20:11.988 "state": "enabled", 00:20:11.988 "thread": "nvmf_tgt_poll_group_000", 00:20:11.988 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:11.988 "listen_address": { 00:20:11.988 "trtype": "TCP", 00:20:11.988 "adrfam": "IPv4", 00:20:11.988 "traddr": "10.0.0.2", 00:20:11.988 "trsvcid": "4420" 00:20:11.988 }, 00:20:11.988 "peer_address": { 00:20:11.988 "trtype": "TCP", 00:20:11.988 "adrfam": "IPv4", 00:20:11.988 "traddr": "10.0.0.1", 00:20:11.988 "trsvcid": "43192" 00:20:11.988 }, 00:20:11.988 "auth": { 00:20:11.988 "state": "completed", 00:20:11.988 "digest": "sha256", 00:20:11.988 "dhgroup": "ffdhe3072" 00:20:11.988 } 00:20:11.988 } 00:20:11.988 ]' 00:20:11.988 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.988 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.988 22:43:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.988 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:11.988 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.988 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.988 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.988 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.246 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:20:12.246 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:20:13.178 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.178 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:13.178 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:20:13.178 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.178 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.178 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.178 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.178 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:13.178 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:13.436 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:13.436 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.436 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:13.436 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:13.436 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:13.436 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.436 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.436 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:13.436 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.436 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.436 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.436 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.436 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.001 00:20:14.001 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.001 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.001 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.259 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.259 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.259 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.259 22:43:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.259 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.259 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.259 { 00:20:14.259 "cntlid": 25, 00:20:14.259 "qid": 0, 00:20:14.259 "state": "enabled", 00:20:14.259 "thread": "nvmf_tgt_poll_group_000", 00:20:14.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:14.259 "listen_address": { 00:20:14.259 "trtype": "TCP", 00:20:14.259 "adrfam": "IPv4", 00:20:14.259 "traddr": "10.0.0.2", 00:20:14.259 "trsvcid": "4420" 00:20:14.259 }, 00:20:14.259 "peer_address": { 00:20:14.259 "trtype": "TCP", 00:20:14.259 "adrfam": "IPv4", 00:20:14.259 "traddr": "10.0.0.1", 00:20:14.259 "trsvcid": "43220" 00:20:14.259 }, 00:20:14.259 "auth": { 00:20:14.259 "state": "completed", 00:20:14.259 "digest": "sha256", 00:20:14.259 "dhgroup": "ffdhe4096" 00:20:14.259 } 00:20:14.259 } 00:20:14.259 ]' 00:20:14.259 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.259 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.259 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.259 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:14.259 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.259 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.259 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.259 22:43:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.517 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:20:14.517 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:20:15.449 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.449 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.449 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.449 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.449 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.449 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.449 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:15.449 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:15.707 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:15.707 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.707 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:15.707 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:15.707 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:15.707 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.707 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.707 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.707 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.707 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.707 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.707 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.707 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.965 00:20:16.223 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.223 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.223 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.481 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.481 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.481 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.481 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.481 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.481 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.481 { 00:20:16.481 "cntlid": 27, 00:20:16.481 "qid": 0, 00:20:16.481 "state": "enabled", 00:20:16.481 "thread": "nvmf_tgt_poll_group_000", 00:20:16.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:16.481 
"listen_address": { 00:20:16.481 "trtype": "TCP", 00:20:16.481 "adrfam": "IPv4", 00:20:16.481 "traddr": "10.0.0.2", 00:20:16.481 "trsvcid": "4420" 00:20:16.481 }, 00:20:16.481 "peer_address": { 00:20:16.481 "trtype": "TCP", 00:20:16.481 "adrfam": "IPv4", 00:20:16.481 "traddr": "10.0.0.1", 00:20:16.481 "trsvcid": "43244" 00:20:16.481 }, 00:20:16.481 "auth": { 00:20:16.481 "state": "completed", 00:20:16.481 "digest": "sha256", 00:20:16.481 "dhgroup": "ffdhe4096" 00:20:16.481 } 00:20:16.481 } 00:20:16.481 ]' 00:20:16.481 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.481 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.481 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.481 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:16.481 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.481 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.481 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.481 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.739 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:20:16.739 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:20:17.673 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.673 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:17.673 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.673 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.673 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.673 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.673 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:17.673 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:17.932 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:17.932 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.932 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:20:17.932 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:17.932 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:17.932 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.932 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.932 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.932 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.932 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.932 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.932 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.932 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.498 00:20:18.498 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:18.498 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.498 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.756 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.756 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.756 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.756 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.756 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.756 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.756 { 00:20:18.756 "cntlid": 29, 00:20:18.756 "qid": 0, 00:20:18.756 "state": "enabled", 00:20:18.756 "thread": "nvmf_tgt_poll_group_000", 00:20:18.756 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:18.756 "listen_address": { 00:20:18.756 "trtype": "TCP", 00:20:18.756 "adrfam": "IPv4", 00:20:18.756 "traddr": "10.0.0.2", 00:20:18.756 "trsvcid": "4420" 00:20:18.756 }, 00:20:18.756 "peer_address": { 00:20:18.756 "trtype": "TCP", 00:20:18.756 "adrfam": "IPv4", 00:20:18.756 "traddr": "10.0.0.1", 00:20:18.756 "trsvcid": "43274" 00:20:18.756 }, 00:20:18.756 "auth": { 00:20:18.756 "state": "completed", 00:20:18.756 "digest": "sha256", 00:20:18.756 "dhgroup": "ffdhe4096" 00:20:18.756 } 00:20:18.756 } 00:20:18.756 ]' 00:20:18.756 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.756 22:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:18.756 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.756 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:18.756 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.756 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.756 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.756 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.014 22:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:20:19.014 22:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:20:19.948 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.948 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:19.948 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.948 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.948 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.948 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.948 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:19.948 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:20.206 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:20.206 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.206 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:20.206 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:20.206 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:20.206 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.206 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:20.206 22:43:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.206 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.206 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.206 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:20.206 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:20.206 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:20.464 00:20:20.723 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.723 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.723 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.981 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.981 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.981 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.981 22:43:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.981 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.981 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.981 { 00:20:20.981 "cntlid": 31, 00:20:20.981 "qid": 0, 00:20:20.981 "state": "enabled", 00:20:20.981 "thread": "nvmf_tgt_poll_group_000", 00:20:20.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:20.981 "listen_address": { 00:20:20.981 "trtype": "TCP", 00:20:20.981 "adrfam": "IPv4", 00:20:20.981 "traddr": "10.0.0.2", 00:20:20.981 "trsvcid": "4420" 00:20:20.981 }, 00:20:20.981 "peer_address": { 00:20:20.981 "trtype": "TCP", 00:20:20.981 "adrfam": "IPv4", 00:20:20.981 "traddr": "10.0.0.1", 00:20:20.981 "trsvcid": "37242" 00:20:20.981 }, 00:20:20.981 "auth": { 00:20:20.981 "state": "completed", 00:20:20.981 "digest": "sha256", 00:20:20.981 "dhgroup": "ffdhe4096" 00:20:20.981 } 00:20:20.981 } 00:20:20.981 ]' 00:20:20.981 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.981 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:20.981 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.981 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:20.981 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.981 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.981 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.981 22:43:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.239 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:20:21.239 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:20:22.172 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.172 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.172 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.172 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.172 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.172 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.172 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.172 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:20:22.172 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:22.431 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:22.431 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.431 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:22.431 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:22.431 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:22.431 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.431 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.431 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.431 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.431 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.431 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.431 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.431 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.996 00:20:22.996 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.996 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.996 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.254 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.254 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.254 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.254 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.254 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.254 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.254 { 00:20:23.254 "cntlid": 33, 00:20:23.254 "qid": 0, 00:20:23.254 "state": "enabled", 00:20:23.254 "thread": "nvmf_tgt_poll_group_000", 00:20:23.254 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:23.254 "listen_address": { 
00:20:23.254 "trtype": "TCP", 00:20:23.254 "adrfam": "IPv4", 00:20:23.254 "traddr": "10.0.0.2", 00:20:23.254 "trsvcid": "4420" 00:20:23.254 }, 00:20:23.254 "peer_address": { 00:20:23.254 "trtype": "TCP", 00:20:23.254 "adrfam": "IPv4", 00:20:23.254 "traddr": "10.0.0.1", 00:20:23.254 "trsvcid": "37262" 00:20:23.254 }, 00:20:23.254 "auth": { 00:20:23.254 "state": "completed", 00:20:23.254 "digest": "sha256", 00:20:23.254 "dhgroup": "ffdhe6144" 00:20:23.254 } 00:20:23.254 } 00:20:23.254 ]' 00:20:23.254 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.254 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:23.254 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.254 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:23.254 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.512 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.512 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.512 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.771 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:20:23.771 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:20:24.706 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.706 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.706 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.706 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.706 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.706 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.706 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:24.706 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:24.964 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:24.964 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:20:24.964 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:24.964 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:24.964 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:24.964 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.964 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.964 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.964 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.964 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.964 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.964 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.964 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.530 00:20:25.530 22:43:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.530 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.530 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.789 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.789 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.789 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.789 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.789 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.789 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.789 { 00:20:25.789 "cntlid": 35, 00:20:25.789 "qid": 0, 00:20:25.789 "state": "enabled", 00:20:25.789 "thread": "nvmf_tgt_poll_group_000", 00:20:25.789 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:25.789 "listen_address": { 00:20:25.789 "trtype": "TCP", 00:20:25.789 "adrfam": "IPv4", 00:20:25.789 "traddr": "10.0.0.2", 00:20:25.789 "trsvcid": "4420" 00:20:25.789 }, 00:20:25.789 "peer_address": { 00:20:25.789 "trtype": "TCP", 00:20:25.789 "adrfam": "IPv4", 00:20:25.789 "traddr": "10.0.0.1", 00:20:25.789 "trsvcid": "37284" 00:20:25.789 }, 00:20:25.789 "auth": { 00:20:25.789 "state": "completed", 00:20:25.789 "digest": "sha256", 00:20:25.789 "dhgroup": "ffdhe6144" 00:20:25.789 } 00:20:25.789 } 00:20:25.789 ]' 00:20:25.789 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:20:25.789 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:25.789 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.789 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:25.789 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.789 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.789 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.789 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.047 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:20:26.047 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:20:26.982 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.982 22:43:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.982 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.982 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.982 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.982 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.982 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:26.982 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:27.240 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:27.240 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.240 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:27.240 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:27.240 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:27.240 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.240 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.240 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.240 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.240 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.240 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.240 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.240 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.806 00:20:27.806 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.806 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.806 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.065 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.065 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.065 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.065 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.065 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.065 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.065 { 00:20:28.065 "cntlid": 37, 00:20:28.065 "qid": 0, 00:20:28.065 "state": "enabled", 00:20:28.065 "thread": "nvmf_tgt_poll_group_000", 00:20:28.065 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:28.065 "listen_address": { 00:20:28.065 "trtype": "TCP", 00:20:28.065 "adrfam": "IPv4", 00:20:28.065 "traddr": "10.0.0.2", 00:20:28.065 "trsvcid": "4420" 00:20:28.065 }, 00:20:28.065 "peer_address": { 00:20:28.065 "trtype": "TCP", 00:20:28.065 "adrfam": "IPv4", 00:20:28.065 "traddr": "10.0.0.1", 00:20:28.065 "trsvcid": "37306" 00:20:28.065 }, 00:20:28.065 "auth": { 00:20:28.065 "state": "completed", 00:20:28.065 "digest": "sha256", 00:20:28.065 "dhgroup": "ffdhe6144" 00:20:28.065 } 00:20:28.065 } 00:20:28.065 ]' 00:20:28.065 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.065 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:28.065 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.323 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:28.323 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.323 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:28.323 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.323 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.582 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:20:28.582 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:20:29.516 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.516 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.516 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.516 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.516 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.516 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:29.516 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:29.516 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:29.774 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:29.774 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.774 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:29.774 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:29.774 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:29.774 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.774 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:29.774 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.774 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.774 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.774 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:29.774 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:29.774 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:30.341 00:20:30.341 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.341 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.341 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.599 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.599 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.599 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.599 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.599 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.599 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.599 { 00:20:30.599 "cntlid": 39, 00:20:30.599 "qid": 0, 00:20:30.599 "state": "enabled", 00:20:30.599 "thread": "nvmf_tgt_poll_group_000", 00:20:30.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:30.599 "listen_address": { 00:20:30.599 "trtype": 
"TCP", 00:20:30.599 "adrfam": "IPv4", 00:20:30.599 "traddr": "10.0.0.2", 00:20:30.599 "trsvcid": "4420" 00:20:30.599 }, 00:20:30.599 "peer_address": { 00:20:30.599 "trtype": "TCP", 00:20:30.599 "adrfam": "IPv4", 00:20:30.599 "traddr": "10.0.0.1", 00:20:30.599 "trsvcid": "38784" 00:20:30.599 }, 00:20:30.599 "auth": { 00:20:30.599 "state": "completed", 00:20:30.599 "digest": "sha256", 00:20:30.599 "dhgroup": "ffdhe6144" 00:20:30.599 } 00:20:30.599 } 00:20:30.599 ]' 00:20:30.599 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.599 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:30.599 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.599 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:30.599 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.599 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.599 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.599 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.857 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:20:30.857 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:20:31.791 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.791 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.791 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.791 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.791 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.791 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.791 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.791 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:31.791 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:32.049 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:32.049 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.049 22:43:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:32.049 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:32.049 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:32.049 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.049 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.049 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.049 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.049 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.049 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.049 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.049 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.982 00:20:32.982 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.982 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.982 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.240 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.240 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.240 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.240 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.240 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.240 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.240 { 00:20:33.240 "cntlid": 41, 00:20:33.240 "qid": 0, 00:20:33.240 "state": "enabled", 00:20:33.240 "thread": "nvmf_tgt_poll_group_000", 00:20:33.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:33.240 "listen_address": { 00:20:33.240 "trtype": "TCP", 00:20:33.240 "adrfam": "IPv4", 00:20:33.240 "traddr": "10.0.0.2", 00:20:33.240 "trsvcid": "4420" 00:20:33.240 }, 00:20:33.240 "peer_address": { 00:20:33.240 "trtype": "TCP", 00:20:33.240 "adrfam": "IPv4", 00:20:33.240 "traddr": "10.0.0.1", 00:20:33.240 "trsvcid": "38818" 00:20:33.240 }, 00:20:33.240 "auth": { 00:20:33.240 "state": "completed", 00:20:33.240 "digest": "sha256", 00:20:33.240 "dhgroup": "ffdhe8192" 00:20:33.240 } 00:20:33.240 } 00:20:33.240 ]' 00:20:33.240 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.240 22:43:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:33.240 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.240 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:33.240 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.498 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.498 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.498 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.756 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:20:33.756 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:20:34.689 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:20:34.689 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.689 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.689 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.689 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.689 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.689 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:34.689 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:34.946 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:34.946 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.946 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:34.946 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:34.946 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:34.946 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.946 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.946 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.946 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.946 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.946 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.946 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.946 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.881 00:20:35.881 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.881 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.881 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.881 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.881 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.881 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.881 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.881 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.881 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.881 { 00:20:35.881 "cntlid": 43, 00:20:35.881 "qid": 0, 00:20:35.881 "state": "enabled", 00:20:35.881 "thread": "nvmf_tgt_poll_group_000", 00:20:35.881 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:35.881 "listen_address": { 00:20:35.881 "trtype": "TCP", 00:20:35.881 "adrfam": "IPv4", 00:20:35.881 "traddr": "10.0.0.2", 00:20:35.881 "trsvcid": "4420" 00:20:35.881 }, 00:20:35.881 "peer_address": { 00:20:35.881 "trtype": "TCP", 00:20:35.881 "adrfam": "IPv4", 00:20:35.881 "traddr": "10.0.0.1", 00:20:35.881 "trsvcid": "38850" 00:20:35.881 }, 00:20:35.881 "auth": { 00:20:35.881 "state": "completed", 00:20:35.881 "digest": "sha256", 00:20:35.881 "dhgroup": "ffdhe8192" 00:20:35.881 } 00:20:35.881 } 00:20:35.881 ]' 00:20:35.881 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.881 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:35.881 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.139 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:36.139 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.139 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:36.139 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.139 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.397 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:20:36.397 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:20:37.332 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.332 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.332 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.332 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.332 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.332 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:37.332 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:37.332 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:37.590 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:37.590 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.590 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:37.590 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:37.590 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:37.590 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.590 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.590 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.590 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.590 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.590 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.590 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.590 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.524 00:20:38.524 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.524 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.524 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.524 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.524 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.524 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.524 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.524 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.524 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.524 { 00:20:38.524 "cntlid": 45, 00:20:38.524 "qid": 0, 00:20:38.524 "state": "enabled", 00:20:38.524 "thread": "nvmf_tgt_poll_group_000", 00:20:38.524 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:38.524 "listen_address": { 00:20:38.524 "trtype": "TCP", 00:20:38.524 "adrfam": "IPv4", 00:20:38.524 "traddr": "10.0.0.2", 00:20:38.524 "trsvcid": "4420" 00:20:38.524 }, 00:20:38.524 "peer_address": { 00:20:38.524 "trtype": "TCP", 00:20:38.524 "adrfam": "IPv4", 00:20:38.524 "traddr": "10.0.0.1", 00:20:38.524 "trsvcid": "38878" 00:20:38.524 }, 00:20:38.524 "auth": { 00:20:38.524 "state": "completed", 00:20:38.524 "digest": "sha256", 00:20:38.524 "dhgroup": "ffdhe8192" 00:20:38.524 } 00:20:38.524 } 00:20:38.524 ]' 00:20:38.524 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.524 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:38.524 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.782 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:38.782 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.782 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.782 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.782 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.040 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:20:39.040 22:43:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:20:39.975 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.975 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:39.975 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.975 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.975 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.975 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.975 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:39.975 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:40.233 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:40.233 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:20:40.233 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:40.233 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:40.233 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:40.233 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.233 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:40.233 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.233 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.233 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.233 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:40.233 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:40.233 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:41.167 00:20:41.167 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:41.167 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.167 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.167 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.167 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.167 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.167 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.167 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.167 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.167 { 00:20:41.167 "cntlid": 47, 00:20:41.167 "qid": 0, 00:20:41.167 "state": "enabled", 00:20:41.167 "thread": "nvmf_tgt_poll_group_000", 00:20:41.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:41.167 "listen_address": { 00:20:41.167 "trtype": "TCP", 00:20:41.167 "adrfam": "IPv4", 00:20:41.167 "traddr": "10.0.0.2", 00:20:41.167 "trsvcid": "4420" 00:20:41.167 }, 00:20:41.167 "peer_address": { 00:20:41.167 "trtype": "TCP", 00:20:41.167 "adrfam": "IPv4", 00:20:41.167 "traddr": "10.0.0.1", 00:20:41.167 "trsvcid": "46636" 00:20:41.167 }, 00:20:41.167 "auth": { 00:20:41.167 "state": "completed", 00:20:41.167 "digest": "sha256", 00:20:41.167 "dhgroup": "ffdhe8192" 00:20:41.167 } 00:20:41.167 } 00:20:41.167 ]' 00:20:41.167 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.424 22:43:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:41.424 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.424 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:41.424 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.424 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.424 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.425 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.682 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:20:41.682 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:20:42.616 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.616 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.616 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.616 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.616 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.616 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:42.616 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:42.616 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.616 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:42.616 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:42.873 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:42.873 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.873 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:42.873 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:42.873 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:42.873 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.873 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.873 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.873 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.873 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.873 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.873 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.873 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.131 00:20:43.131 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.131 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.131 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.389 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.389 22:43:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.389 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.389 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.389 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.389 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.389 { 00:20:43.389 "cntlid": 49, 00:20:43.389 "qid": 0, 00:20:43.389 "state": "enabled", 00:20:43.389 "thread": "nvmf_tgt_poll_group_000", 00:20:43.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:43.389 "listen_address": { 00:20:43.389 "trtype": "TCP", 00:20:43.389 "adrfam": "IPv4", 00:20:43.389 "traddr": "10.0.0.2", 00:20:43.389 "trsvcid": "4420" 00:20:43.389 }, 00:20:43.389 "peer_address": { 00:20:43.389 "trtype": "TCP", 00:20:43.389 "adrfam": "IPv4", 00:20:43.389 "traddr": "10.0.0.1", 00:20:43.389 "trsvcid": "46656" 00:20:43.389 }, 00:20:43.389 "auth": { 00:20:43.389 "state": "completed", 00:20:43.389 "digest": "sha384", 00:20:43.389 "dhgroup": "null" 00:20:43.389 } 00:20:43.389 } 00:20:43.389 ]' 00:20:43.389 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.389 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.389 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.389 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:43.389 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.647 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.647 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.647 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.905 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:20:43.905 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:20:44.839 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.839 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.839 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.839 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.839 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.839 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.839 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:44.839 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:45.098 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:45.098 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.098 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:45.098 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:45.098 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:45.098 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.098 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.098 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.098 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.098 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.098 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.098 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.098 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.356 00:20:45.356 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.356 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.356 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.614 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.614 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.614 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.614 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.614 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.614 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.614 { 00:20:45.614 "cntlid": 51, 
00:20:45.614 "qid": 0, 00:20:45.614 "state": "enabled", 00:20:45.614 "thread": "nvmf_tgt_poll_group_000", 00:20:45.614 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:45.614 "listen_address": { 00:20:45.614 "trtype": "TCP", 00:20:45.614 "adrfam": "IPv4", 00:20:45.614 "traddr": "10.0.0.2", 00:20:45.614 "trsvcid": "4420" 00:20:45.614 }, 00:20:45.614 "peer_address": { 00:20:45.614 "trtype": "TCP", 00:20:45.614 "adrfam": "IPv4", 00:20:45.614 "traddr": "10.0.0.1", 00:20:45.614 "trsvcid": "46678" 00:20:45.614 }, 00:20:45.614 "auth": { 00:20:45.614 "state": "completed", 00:20:45.614 "digest": "sha384", 00:20:45.614 "dhgroup": "null" 00:20:45.614 } 00:20:45.614 } 00:20:45.614 ]' 00:20:45.614 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.614 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.614 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.614 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:45.614 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.872 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.872 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.872 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.130 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret 
DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:20:46.130 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:20:47.064 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.064 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.064 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.064 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.064 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.064 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.064 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:47.064 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:47.323 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 
00:20:47.323 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.323 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:47.323 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:47.323 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:47.323 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.323 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.323 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.323 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.323 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.323 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.323 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.323 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.581 00:20:47.581 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.581 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.581 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.839 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.839 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.839 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.839 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.839 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.839 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.839 { 00:20:47.839 "cntlid": 53, 00:20:47.839 "qid": 0, 00:20:47.839 "state": "enabled", 00:20:47.839 "thread": "nvmf_tgt_poll_group_000", 00:20:47.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:47.839 "listen_address": { 00:20:47.839 "trtype": "TCP", 00:20:47.839 "adrfam": "IPv4", 00:20:47.839 "traddr": "10.0.0.2", 00:20:47.839 "trsvcid": "4420" 00:20:47.839 }, 00:20:47.839 "peer_address": { 00:20:47.839 "trtype": "TCP", 00:20:47.839 "adrfam": "IPv4", 00:20:47.839 "traddr": "10.0.0.1", 00:20:47.839 "trsvcid": "46698" 00:20:47.840 }, 00:20:47.840 "auth": { 00:20:47.840 "state": "completed", 00:20:47.840 "digest": "sha384", 00:20:47.840 "dhgroup": "null" 00:20:47.840 } 00:20:47.840 } 
00:20:47.840 ]' 00:20:47.840 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.840 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.840 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.840 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:47.840 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.098 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.098 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.098 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.356 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:20:48.356 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:20:49.290 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.290 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.290 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.290 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.290 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.290 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.290 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.290 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:49.290 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:49.290 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:49.290 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.290 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:49.290 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:49.290 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:49.290 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.290 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:49.290 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.290 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.290 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.290 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:49.290 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:49.290 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:49.856 00:20:49.856 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.856 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.856 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.115 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.115 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:50.115 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.115 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.115 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.115 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.115 { 00:20:50.115 "cntlid": 55, 00:20:50.115 "qid": 0, 00:20:50.115 "state": "enabled", 00:20:50.115 "thread": "nvmf_tgt_poll_group_000", 00:20:50.115 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:50.115 "listen_address": { 00:20:50.115 "trtype": "TCP", 00:20:50.115 "adrfam": "IPv4", 00:20:50.115 "traddr": "10.0.0.2", 00:20:50.115 "trsvcid": "4420" 00:20:50.115 }, 00:20:50.115 "peer_address": { 00:20:50.115 "trtype": "TCP", 00:20:50.115 "adrfam": "IPv4", 00:20:50.115 "traddr": "10.0.0.1", 00:20:50.115 "trsvcid": "48922" 00:20:50.115 }, 00:20:50.115 "auth": { 00:20:50.115 "state": "completed", 00:20:50.115 "digest": "sha384", 00:20:50.115 "dhgroup": "null" 00:20:50.115 } 00:20:50.115 } 00:20:50.115 ]' 00:20:50.115 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.115 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.115 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.115 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:50.115 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.115 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.115 22:43:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.115 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.373 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:20:50.373 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:20:51.308 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.309 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.309 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.309 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.309 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.309 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:51.309 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.309 22:43:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:51.309 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:51.567 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:51.567 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.567 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:51.567 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:51.567 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:51.567 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.567 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.567 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.567 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.567 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.567 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.567 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.567 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.826 00:20:52.084 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.084 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.084 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.342 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.342 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.342 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.342 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.342 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.342 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.342 { 00:20:52.342 "cntlid": 57, 00:20:52.342 "qid": 0, 00:20:52.342 "state": "enabled", 00:20:52.342 "thread": "nvmf_tgt_poll_group_000", 00:20:52.342 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:52.342 "listen_address": { 00:20:52.342 "trtype": "TCP", 00:20:52.342 "adrfam": "IPv4", 00:20:52.342 "traddr": "10.0.0.2", 00:20:52.342 "trsvcid": "4420" 00:20:52.342 }, 00:20:52.342 "peer_address": { 00:20:52.342 "trtype": "TCP", 00:20:52.342 "adrfam": "IPv4", 00:20:52.342 "traddr": "10.0.0.1", 00:20:52.342 "trsvcid": "48946" 00:20:52.342 }, 00:20:52.342 "auth": { 00:20:52.342 "state": "completed", 00:20:52.342 "digest": "sha384", 00:20:52.342 "dhgroup": "ffdhe2048" 00:20:52.342 } 00:20:52.342 } 00:20:52.342 ]' 00:20:52.342 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.342 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.342 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.342 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:52.342 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.342 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.342 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.342 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.600 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret 
DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:20:52.600 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:20:53.538 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.538 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.538 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.538 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.538 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.539 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.539 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:53.539 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:53.797 22:43:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:53.797 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.797 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:53.797 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:53.797 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:53.797 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.797 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.797 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.797 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.797 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.797 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.797 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.797 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.363 00:20:54.363 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.363 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.363 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.622 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.622 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.622 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.622 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.622 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.622 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.622 { 00:20:54.622 "cntlid": 59, 00:20:54.622 "qid": 0, 00:20:54.622 "state": "enabled", 00:20:54.622 "thread": "nvmf_tgt_poll_group_000", 00:20:54.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:54.622 "listen_address": { 00:20:54.622 "trtype": "TCP", 00:20:54.622 "adrfam": "IPv4", 00:20:54.622 "traddr": "10.0.0.2", 00:20:54.622 "trsvcid": "4420" 00:20:54.622 }, 00:20:54.622 "peer_address": { 00:20:54.622 "trtype": "TCP", 00:20:54.622 "adrfam": "IPv4", 00:20:54.622 "traddr": "10.0.0.1", 00:20:54.622 "trsvcid": "48980" 00:20:54.622 }, 00:20:54.622 "auth": { 00:20:54.622 "state": 
"completed", 00:20:54.622 "digest": "sha384", 00:20:54.622 "dhgroup": "ffdhe2048" 00:20:54.622 } 00:20:54.622 } 00:20:54.622 ]' 00:20:54.622 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.622 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.622 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.622 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:54.622 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.622 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.622 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.622 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.880 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:20:54.881 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:20:55.848 22:43:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.848 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.848 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.848 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.849 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.849 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.849 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:55.849 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:56.126 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:56.126 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.126 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:56.126 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:56.126 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:56.126 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.126 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.126 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.126 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.126 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.126 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.126 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.126 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.400 00:20:56.400 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.400 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.400 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.674 
22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.674 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.674 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.674 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.674 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.674 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.674 { 00:20:56.674 "cntlid": 61, 00:20:56.674 "qid": 0, 00:20:56.674 "state": "enabled", 00:20:56.674 "thread": "nvmf_tgt_poll_group_000", 00:20:56.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:56.674 "listen_address": { 00:20:56.674 "trtype": "TCP", 00:20:56.674 "adrfam": "IPv4", 00:20:56.674 "traddr": "10.0.0.2", 00:20:56.674 "trsvcid": "4420" 00:20:56.674 }, 00:20:56.674 "peer_address": { 00:20:56.674 "trtype": "TCP", 00:20:56.674 "adrfam": "IPv4", 00:20:56.674 "traddr": "10.0.0.1", 00:20:56.674 "trsvcid": "49006" 00:20:56.674 }, 00:20:56.674 "auth": { 00:20:56.674 "state": "completed", 00:20:56.674 "digest": "sha384", 00:20:56.674 "dhgroup": "ffdhe2048" 00:20:56.674 } 00:20:56.674 } 00:20:56.674 ]' 00:20:56.674 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.953 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.953 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.954 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:56.954 22:43:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.954 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.954 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.954 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.230 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:20:57.230 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:20:58.204 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.204 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.204 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.204 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.204 
22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.204 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.204 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:58.204 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:58.462 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:58.462 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.462 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:58.462 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:58.462 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:58.462 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.462 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:58.462 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.462 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.462 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.462 22:44:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:58.462 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.462 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.719 00:20:58.719 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.719 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.719 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.976 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.976 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.976 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.976 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.976 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.976 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.976 { 00:20:58.976 "cntlid": 63, 00:20:58.976 
"qid": 0, 00:20:58.976 "state": "enabled", 00:20:58.976 "thread": "nvmf_tgt_poll_group_000", 00:20:58.976 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:58.976 "listen_address": { 00:20:58.976 "trtype": "TCP", 00:20:58.976 "adrfam": "IPv4", 00:20:58.976 "traddr": "10.0.0.2", 00:20:58.976 "trsvcid": "4420" 00:20:58.976 }, 00:20:58.976 "peer_address": { 00:20:58.976 "trtype": "TCP", 00:20:58.976 "adrfam": "IPv4", 00:20:58.976 "traddr": "10.0.0.1", 00:20:58.976 "trsvcid": "49036" 00:20:58.976 }, 00:20:58.976 "auth": { 00:20:58.976 "state": "completed", 00:20:58.976 "digest": "sha384", 00:20:58.976 "dhgroup": "ffdhe2048" 00:20:58.976 } 00:20:58.976 } 00:20:58.976 ]' 00:20:58.976 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.976 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:58.976 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.976 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:58.976 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.234 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.234 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.234 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.491 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:20:59.491 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:21:00.423 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.423 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.423 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.423 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.423 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.423 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:00.423 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.423 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:00.423 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:00.681 22:44:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:21:00.681 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.681 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:00.681 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:00.681 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:00.681 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.681 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.681 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.681 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.681 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.681 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.681 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.681 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.939 00:21:00.939 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.939 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.939 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.196 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.196 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.196 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.196 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.196 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.196 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.196 { 00:21:01.196 "cntlid": 65, 00:21:01.196 "qid": 0, 00:21:01.196 "state": "enabled", 00:21:01.196 "thread": "nvmf_tgt_poll_group_000", 00:21:01.196 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:01.196 "listen_address": { 00:21:01.196 "trtype": "TCP", 00:21:01.196 "adrfam": "IPv4", 00:21:01.196 "traddr": "10.0.0.2", 00:21:01.196 "trsvcid": "4420" 00:21:01.196 }, 00:21:01.196 "peer_address": { 00:21:01.196 "trtype": "TCP", 00:21:01.196 "adrfam": "IPv4", 00:21:01.196 "traddr": "10.0.0.1", 00:21:01.196 "trsvcid": "56464" 00:21:01.196 }, 00:21:01.196 "auth": { 00:21:01.196 "state": 
"completed", 00:21:01.196 "digest": "sha384", 00:21:01.196 "dhgroup": "ffdhe3072" 00:21:01.196 } 00:21:01.196 } 00:21:01.196 ]' 00:21:01.196 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.196 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:01.196 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.196 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:01.196 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.196 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.196 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.196 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.760 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:21:01.760 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret 
DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:21:02.693 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.693 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.693 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.693 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.693 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.693 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.693 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:02.693 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:02.693 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:21:02.693 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.693 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:02.693 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:02.693 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:21:02.693 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.693 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.693 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.693 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.693 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.693 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.693 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.693 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.258 00:21:03.258 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.258 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.258 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.515 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.515 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.515 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.515 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.515 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.515 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.515 { 00:21:03.515 "cntlid": 67, 00:21:03.515 "qid": 0, 00:21:03.515 "state": "enabled", 00:21:03.515 "thread": "nvmf_tgt_poll_group_000", 00:21:03.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:03.515 "listen_address": { 00:21:03.515 "trtype": "TCP", 00:21:03.515 "adrfam": "IPv4", 00:21:03.515 "traddr": "10.0.0.2", 00:21:03.515 "trsvcid": "4420" 00:21:03.515 }, 00:21:03.515 "peer_address": { 00:21:03.515 "trtype": "TCP", 00:21:03.515 "adrfam": "IPv4", 00:21:03.515 "traddr": "10.0.0.1", 00:21:03.515 "trsvcid": "56500" 00:21:03.515 }, 00:21:03.515 "auth": { 00:21:03.516 "state": "completed", 00:21:03.516 "digest": "sha384", 00:21:03.516 "dhgroup": "ffdhe3072" 00:21:03.516 } 00:21:03.516 } 00:21:03.516 ]' 00:21:03.516 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.516 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.516 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.516 22:44:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:03.516 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.516 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.516 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.516 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.773 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:21:03.773 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:21:04.705 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.705 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.705 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:04.705 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.705 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.705 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.705 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:04.705 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:04.963 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:21:04.963 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.963 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:04.963 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:04.963 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:04.963 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.963 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.963 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.963 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:04.963 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.963 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.963 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.963 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.529 00:21:05.529 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.529 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.529 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.787 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.787 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.787 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.787 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.787 22:44:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.787 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.787 { 00:21:05.787 "cntlid": 69, 00:21:05.787 "qid": 0, 00:21:05.787 "state": "enabled", 00:21:05.787 "thread": "nvmf_tgt_poll_group_000", 00:21:05.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:05.787 "listen_address": { 00:21:05.787 "trtype": "TCP", 00:21:05.787 "adrfam": "IPv4", 00:21:05.787 "traddr": "10.0.0.2", 00:21:05.787 "trsvcid": "4420" 00:21:05.787 }, 00:21:05.787 "peer_address": { 00:21:05.787 "trtype": "TCP", 00:21:05.787 "adrfam": "IPv4", 00:21:05.787 "traddr": "10.0.0.1", 00:21:05.787 "trsvcid": "56532" 00:21:05.787 }, 00:21:05.787 "auth": { 00:21:05.787 "state": "completed", 00:21:05.787 "digest": "sha384", 00:21:05.787 "dhgroup": "ffdhe3072" 00:21:05.787 } 00:21:05.787 } 00:21:05.787 ]' 00:21:05.787 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.787 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.787 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.787 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:05.787 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.787 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.787 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.787 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.045 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:21:06.045 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:21:06.979 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.979 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.979 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.979 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.979 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.979 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.979 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:06.979 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:07.237 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:07.237 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.237 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:07.237 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:07.237 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:07.237 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.237 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:07.237 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.237 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.237 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.237 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:07.237 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:07.237 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:07.494 00:21:07.494 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.494 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.494 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.060 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.060 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.060 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.060 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.060 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.060 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.060 { 00:21:08.060 "cntlid": 71, 00:21:08.060 "qid": 0, 00:21:08.060 "state": "enabled", 00:21:08.060 "thread": "nvmf_tgt_poll_group_000", 00:21:08.060 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:08.060 "listen_address": { 00:21:08.060 "trtype": "TCP", 00:21:08.060 "adrfam": "IPv4", 00:21:08.060 "traddr": "10.0.0.2", 00:21:08.060 "trsvcid": "4420" 00:21:08.060 }, 00:21:08.060 "peer_address": { 00:21:08.060 "trtype": "TCP", 00:21:08.060 "adrfam": "IPv4", 00:21:08.060 "traddr": "10.0.0.1", 
00:21:08.060 "trsvcid": "56560" 00:21:08.060 }, 00:21:08.060 "auth": { 00:21:08.060 "state": "completed", 00:21:08.060 "digest": "sha384", 00:21:08.060 "dhgroup": "ffdhe3072" 00:21:08.060 } 00:21:08.060 } 00:21:08.060 ]' 00:21:08.060 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.060 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.060 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.060 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:08.060 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.060 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.060 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.060 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.318 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:21:08.318 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:21:09.252 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.252 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.252 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.252 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.252 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.252 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:09.252 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.252 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:09.252 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:09.509 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:09.509 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.509 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:09.509 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:09.509 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:09.509 22:44:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.509 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.509 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.509 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.509 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.509 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.509 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.509 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.767 00:21:10.025 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.025 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.025 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.282 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.282 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.282 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.282 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.282 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.282 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.282 { 00:21:10.282 "cntlid": 73, 00:21:10.282 "qid": 0, 00:21:10.282 "state": "enabled", 00:21:10.282 "thread": "nvmf_tgt_poll_group_000", 00:21:10.282 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:10.282 "listen_address": { 00:21:10.282 "trtype": "TCP", 00:21:10.282 "adrfam": "IPv4", 00:21:10.282 "traddr": "10.0.0.2", 00:21:10.282 "trsvcid": "4420" 00:21:10.282 }, 00:21:10.282 "peer_address": { 00:21:10.282 "trtype": "TCP", 00:21:10.282 "adrfam": "IPv4", 00:21:10.282 "traddr": "10.0.0.1", 00:21:10.282 "trsvcid": "35982" 00:21:10.282 }, 00:21:10.282 "auth": { 00:21:10.282 "state": "completed", 00:21:10.282 "digest": "sha384", 00:21:10.282 "dhgroup": "ffdhe4096" 00:21:10.282 } 00:21:10.282 } 00:21:10.282 ]' 00:21:10.282 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.282 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:10.282 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.282 22:44:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:10.282 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.282 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.282 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.282 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.540 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:21:10.540 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:21:11.476 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.476 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.476 22:44:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.476 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.476 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.476 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.476 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:11.476 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:11.734 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:11.734 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.734 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:11.734 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:11.734 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:11.734 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.734 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.734 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.734 22:44:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.734 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.734 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.734 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.734 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.300 00:21:12.300 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.300 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.300 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.300 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.300 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.300 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.300 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:12.300 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.300 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.300 { 00:21:12.300 "cntlid": 75, 00:21:12.300 "qid": 0, 00:21:12.300 "state": "enabled", 00:21:12.300 "thread": "nvmf_tgt_poll_group_000", 00:21:12.300 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:12.300 "listen_address": { 00:21:12.300 "trtype": "TCP", 00:21:12.300 "adrfam": "IPv4", 00:21:12.300 "traddr": "10.0.0.2", 00:21:12.300 "trsvcid": "4420" 00:21:12.300 }, 00:21:12.300 "peer_address": { 00:21:12.300 "trtype": "TCP", 00:21:12.300 "adrfam": "IPv4", 00:21:12.300 "traddr": "10.0.0.1", 00:21:12.300 "trsvcid": "36000" 00:21:12.300 }, 00:21:12.300 "auth": { 00:21:12.300 "state": "completed", 00:21:12.300 "digest": "sha384", 00:21:12.300 "dhgroup": "ffdhe4096" 00:21:12.300 } 00:21:12.300 } 00:21:12.300 ]' 00:21:12.300 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.558 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:12.558 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.558 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:12.558 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.558 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.558 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.558 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.815 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:21:12.815 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:21:13.756 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.756 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.756 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.756 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.756 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.756 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.756 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:13.757 22:44:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:14.016 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:14.016 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.016 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:14.016 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:14.016 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:14.016 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.016 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.016 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.016 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.016 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.016 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.016 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.016 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.581 00:21:14.582 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.582 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.582 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.840 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.840 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.840 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.840 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.840 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.840 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.840 { 00:21:14.840 "cntlid": 77, 00:21:14.840 "qid": 0, 00:21:14.840 "state": "enabled", 00:21:14.840 "thread": "nvmf_tgt_poll_group_000", 00:21:14.840 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:14.840 "listen_address": { 00:21:14.840 "trtype": "TCP", 00:21:14.840 "adrfam": "IPv4", 00:21:14.840 "traddr": "10.0.0.2", 00:21:14.840 
"trsvcid": "4420" 00:21:14.840 }, 00:21:14.840 "peer_address": { 00:21:14.840 "trtype": "TCP", 00:21:14.840 "adrfam": "IPv4", 00:21:14.840 "traddr": "10.0.0.1", 00:21:14.840 "trsvcid": "36030" 00:21:14.840 }, 00:21:14.840 "auth": { 00:21:14.840 "state": "completed", 00:21:14.840 "digest": "sha384", 00:21:14.840 "dhgroup": "ffdhe4096" 00:21:14.840 } 00:21:14.840 } 00:21:14.840 ]' 00:21:14.840 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.840 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:14.840 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.840 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:14.840 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.840 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.840 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.840 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.098 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:21:15.098 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:21:16.030 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.031 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.031 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.031 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.031 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.031 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.031 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:16.031 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:16.288 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:16.288 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.288 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:16.289 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:16.289 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:16.289 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.289 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:16.289 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.289 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.289 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.289 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:16.289 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:16.289 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:16.856 00:21:16.856 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.856 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.856 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.114 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.114 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.114 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.114 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.114 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.114 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.114 { 00:21:17.114 "cntlid": 79, 00:21:17.114 "qid": 0, 00:21:17.114 "state": "enabled", 00:21:17.114 "thread": "nvmf_tgt_poll_group_000", 00:21:17.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:17.114 "listen_address": { 00:21:17.114 "trtype": "TCP", 00:21:17.114 "adrfam": "IPv4", 00:21:17.114 "traddr": "10.0.0.2", 00:21:17.114 "trsvcid": "4420" 00:21:17.114 }, 00:21:17.114 "peer_address": { 00:21:17.114 "trtype": "TCP", 00:21:17.114 "adrfam": "IPv4", 00:21:17.114 "traddr": "10.0.0.1", 00:21:17.114 "trsvcid": "36058" 00:21:17.114 }, 00:21:17.114 "auth": { 00:21:17.114 "state": "completed", 00:21:17.114 "digest": "sha384", 00:21:17.114 "dhgroup": "ffdhe4096" 00:21:17.114 } 00:21:17.114 } 00:21:17.114 ]' 00:21:17.114 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.114 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:17.114 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.114 22:44:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:17.114 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.114 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.114 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.114 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.371 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:21:17.371 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:21:18.304 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.304 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.304 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.304 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:18.304 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.305 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:18.305 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.305 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:18.305 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:18.563 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:18.563 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.563 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:18.563 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:18.563 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:18.563 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.563 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.563 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.563 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:18.563 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.563 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.563 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.563 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.128 00:21:19.128 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.128 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.128 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.386 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.386 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.386 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.386 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.386 22:44:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.386 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.386 { 00:21:19.386 "cntlid": 81, 00:21:19.386 "qid": 0, 00:21:19.386 "state": "enabled", 00:21:19.386 "thread": "nvmf_tgt_poll_group_000", 00:21:19.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:19.386 "listen_address": { 00:21:19.386 "trtype": "TCP", 00:21:19.386 "adrfam": "IPv4", 00:21:19.386 "traddr": "10.0.0.2", 00:21:19.386 "trsvcid": "4420" 00:21:19.386 }, 00:21:19.386 "peer_address": { 00:21:19.386 "trtype": "TCP", 00:21:19.386 "adrfam": "IPv4", 00:21:19.386 "traddr": "10.0.0.1", 00:21:19.386 "trsvcid": "45726" 00:21:19.386 }, 00:21:19.386 "auth": { 00:21:19.386 "state": "completed", 00:21:19.386 "digest": "sha384", 00:21:19.386 "dhgroup": "ffdhe6144" 00:21:19.386 } 00:21:19.386 } 00:21:19.386 ]' 00:21:19.386 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.386 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:19.386 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.386 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:19.386 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.386 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.386 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.386 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.952 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:21:19.952 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:21:20.885 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.885 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:20.885 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.885 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.885 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.885 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.885 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:20.885 22:44:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:20.885 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:20.885 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.885 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:20.885 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:20.885 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:20.885 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.886 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.886 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.886 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.886 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.886 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.886 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.886 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.818 00:21:21.818 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.818 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.818 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.818 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.818 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.818 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.818 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.818 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.818 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.818 { 00:21:21.818 "cntlid": 83, 00:21:21.818 "qid": 0, 00:21:21.818 "state": "enabled", 00:21:21.818 "thread": "nvmf_tgt_poll_group_000", 00:21:21.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:21.818 "listen_address": { 00:21:21.818 "trtype": "TCP", 00:21:21.818 "adrfam": "IPv4", 00:21:21.818 "traddr": "10.0.0.2", 00:21:21.818 
"trsvcid": "4420" 00:21:21.818 }, 00:21:21.818 "peer_address": { 00:21:21.818 "trtype": "TCP", 00:21:21.818 "adrfam": "IPv4", 00:21:21.818 "traddr": "10.0.0.1", 00:21:21.818 "trsvcid": "45750" 00:21:21.818 }, 00:21:21.818 "auth": { 00:21:21.818 "state": "completed", 00:21:21.818 "digest": "sha384", 00:21:21.818 "dhgroup": "ffdhe6144" 00:21:21.818 } 00:21:21.818 } 00:21:21.818 ]' 00:21:21.818 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.075 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:22.075 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.075 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:22.075 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.075 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.075 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.075 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.333 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:21:22.333 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:21:23.265 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.265 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.265 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.266 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.266 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.266 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.266 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:23.266 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:23.523 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:23.523 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.523 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:23.523 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:23.523 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:23.523 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.523 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.523 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.523 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.523 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.523 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.523 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.523 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.089 00:21:24.089 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.089 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.089 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.347 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.347 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.347 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.347 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.347 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.347 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.347 { 00:21:24.347 "cntlid": 85, 00:21:24.347 "qid": 0, 00:21:24.347 "state": "enabled", 00:21:24.347 "thread": "nvmf_tgt_poll_group_000", 00:21:24.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:24.347 "listen_address": { 00:21:24.347 "trtype": "TCP", 00:21:24.347 "adrfam": "IPv4", 00:21:24.347 "traddr": "10.0.0.2", 00:21:24.347 "trsvcid": "4420" 00:21:24.347 }, 00:21:24.347 "peer_address": { 00:21:24.347 "trtype": "TCP", 00:21:24.347 "adrfam": "IPv4", 00:21:24.347 "traddr": "10.0.0.1", 00:21:24.347 "trsvcid": "45768" 00:21:24.347 }, 00:21:24.347 "auth": { 00:21:24.347 "state": "completed", 00:21:24.347 "digest": "sha384", 00:21:24.347 "dhgroup": "ffdhe6144" 00:21:24.347 } 00:21:24.347 } 00:21:24.347 ]' 00:21:24.347 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.347 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:24.347 22:44:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.347 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:24.347 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.347 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.347 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.347 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.605 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:21:24.605 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:21:25.539 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.539 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.539 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.539 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.539 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.539 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.539 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:25.539 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:25.797 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:25.797 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.797 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:25.797 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:25.797 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:25.797 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.797 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:25.797 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.797 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.797 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.797 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:25.797 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:25.797 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.362 00:21:26.619 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.619 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.619 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.878 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.878 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.878 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.878 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:26.878 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.878 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.878 { 00:21:26.878 "cntlid": 87, 00:21:26.878 "qid": 0, 00:21:26.878 "state": "enabled", 00:21:26.878 "thread": "nvmf_tgt_poll_group_000", 00:21:26.878 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:26.878 "listen_address": { 00:21:26.878 "trtype": "TCP", 00:21:26.878 "adrfam": "IPv4", 00:21:26.878 "traddr": "10.0.0.2", 00:21:26.878 "trsvcid": "4420" 00:21:26.878 }, 00:21:26.878 "peer_address": { 00:21:26.878 "trtype": "TCP", 00:21:26.878 "adrfam": "IPv4", 00:21:26.878 "traddr": "10.0.0.1", 00:21:26.878 "trsvcid": "45796" 00:21:26.878 }, 00:21:26.878 "auth": { 00:21:26.878 "state": "completed", 00:21:26.878 "digest": "sha384", 00:21:26.878 "dhgroup": "ffdhe6144" 00:21:26.878 } 00:21:26.878 } 00:21:26.878 ]' 00:21:26.878 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.878 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:26.878 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.878 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:26.878 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.878 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.878 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.878 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.136 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:21:27.136 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:21:28.068 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.068 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.068 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.068 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.068 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.068 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.068 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.069 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:28.069 22:44:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:28.326 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:28.326 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.326 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:28.326 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:28.326 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:28.326 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.326 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.326 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.326 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.326 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.326 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.326 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.326 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.260 00:21:29.260 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.260 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.260 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.517 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.517 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.517 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.517 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.517 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.517 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.517 { 00:21:29.517 "cntlid": 89, 00:21:29.517 "qid": 0, 00:21:29.517 "state": "enabled", 00:21:29.517 "thread": "nvmf_tgt_poll_group_000", 00:21:29.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:29.518 "listen_address": { 00:21:29.518 "trtype": "TCP", 00:21:29.518 "adrfam": "IPv4", 00:21:29.518 "traddr": "10.0.0.2", 00:21:29.518 
"trsvcid": "4420" 00:21:29.518 }, 00:21:29.518 "peer_address": { 00:21:29.518 "trtype": "TCP", 00:21:29.518 "adrfam": "IPv4", 00:21:29.518 "traddr": "10.0.0.1", 00:21:29.518 "trsvcid": "45824" 00:21:29.518 }, 00:21:29.518 "auth": { 00:21:29.518 "state": "completed", 00:21:29.518 "digest": "sha384", 00:21:29.518 "dhgroup": "ffdhe8192" 00:21:29.518 } 00:21:29.518 } 00:21:29.518 ]' 00:21:29.518 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.518 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:29.518 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.518 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:29.518 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.518 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.518 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.518 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.776 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:21:29.776 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:21:30.709 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.709 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.709 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.709 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.709 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.709 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.709 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:30.709 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:30.967 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:30.967 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.967 22:44:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:30.967 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:30.967 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:30.967 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.967 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.967 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.967 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.967 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.967 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.967 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.967 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.900 00:21:31.900 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.900 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.900 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.158 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.158 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.158 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.158 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.158 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.158 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.158 { 00:21:32.158 "cntlid": 91, 00:21:32.158 "qid": 0, 00:21:32.158 "state": "enabled", 00:21:32.158 "thread": "nvmf_tgt_poll_group_000", 00:21:32.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:32.158 "listen_address": { 00:21:32.158 "trtype": "TCP", 00:21:32.158 "adrfam": "IPv4", 00:21:32.158 "traddr": "10.0.0.2", 00:21:32.158 "trsvcid": "4420" 00:21:32.158 }, 00:21:32.158 "peer_address": { 00:21:32.158 "trtype": "TCP", 00:21:32.158 "adrfam": "IPv4", 00:21:32.158 "traddr": "10.0.0.1", 00:21:32.158 "trsvcid": "52326" 00:21:32.158 }, 00:21:32.158 "auth": { 00:21:32.158 "state": "completed", 00:21:32.158 "digest": "sha384", 00:21:32.158 "dhgroup": "ffdhe8192" 00:21:32.158 } 00:21:32.158 } 00:21:32.158 ]' 00:21:32.158 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.158 22:44:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:32.158 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.158 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:32.158 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.158 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.158 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.158 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.416 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:21:32.416 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:21:33.349 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.349 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.349 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.349 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.349 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.349 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.349 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:33.349 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:33.914 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:33.914 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.914 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:33.914 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:33.914 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:33.914 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.914 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:33.914 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.914 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.914 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.914 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.914 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.914 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.480 00:21:34.480 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.480 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.480 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.045 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.045 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.045 22:44:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.045 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.045 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.045 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.045 { 00:21:35.045 "cntlid": 93, 00:21:35.045 "qid": 0, 00:21:35.045 "state": "enabled", 00:21:35.045 "thread": "nvmf_tgt_poll_group_000", 00:21:35.045 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:35.045 "listen_address": { 00:21:35.045 "trtype": "TCP", 00:21:35.045 "adrfam": "IPv4", 00:21:35.045 "traddr": "10.0.0.2", 00:21:35.045 "trsvcid": "4420" 00:21:35.045 }, 00:21:35.045 "peer_address": { 00:21:35.045 "trtype": "TCP", 00:21:35.045 "adrfam": "IPv4", 00:21:35.045 "traddr": "10.0.0.1", 00:21:35.045 "trsvcid": "52346" 00:21:35.045 }, 00:21:35.045 "auth": { 00:21:35.045 "state": "completed", 00:21:35.045 "digest": "sha384", 00:21:35.045 "dhgroup": "ffdhe8192" 00:21:35.045 } 00:21:35.045 } 00:21:35.045 ]' 00:21:35.045 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.045 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:35.045 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.045 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:35.045 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.045 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.045 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.045 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.303 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:21:35.303 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:21:36.236 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.236 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:36.236 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.236 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.236 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.236 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.236 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:36.236 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:36.494 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:36.494 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.494 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:36.494 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:36.494 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:36.494 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.494 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:36.494 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.494 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.494 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.494 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:36.494 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:36.494 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:37.426 00:21:37.426 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.426 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.426 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.684 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.684 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.684 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.684 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.684 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.684 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.684 { 00:21:37.684 "cntlid": 95, 00:21:37.684 "qid": 0, 00:21:37.684 "state": "enabled", 00:21:37.684 "thread": "nvmf_tgt_poll_group_000", 00:21:37.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:37.684 "listen_address": { 00:21:37.684 "trtype": "TCP", 00:21:37.684 "adrfam": 
"IPv4", 00:21:37.684 "traddr": "10.0.0.2", 00:21:37.684 "trsvcid": "4420" 00:21:37.684 }, 00:21:37.684 "peer_address": { 00:21:37.684 "trtype": "TCP", 00:21:37.684 "adrfam": "IPv4", 00:21:37.684 "traddr": "10.0.0.1", 00:21:37.684 "trsvcid": "52380" 00:21:37.684 }, 00:21:37.684 "auth": { 00:21:37.684 "state": "completed", 00:21:37.684 "digest": "sha384", 00:21:37.684 "dhgroup": "ffdhe8192" 00:21:37.684 } 00:21:37.684 } 00:21:37.684 ]' 00:21:37.684 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.684 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:37.684 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.684 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:37.684 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.684 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.684 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.684 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.942 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:21:37.942 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:21:38.874 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.874 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:38.874 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.874 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.874 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.874 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:38.874 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:38.874 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.874 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:38.874 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:39.132 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:39.132 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.132 
22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:39.132 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:39.132 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:39.132 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.132 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.132 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.132 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.132 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.132 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.132 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.132 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.695 00:21:39.695 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.695 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.695 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.953 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.953 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.953 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.953 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.953 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.953 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.953 { 00:21:39.953 "cntlid": 97, 00:21:39.953 "qid": 0, 00:21:39.953 "state": "enabled", 00:21:39.953 "thread": "nvmf_tgt_poll_group_000", 00:21:39.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:39.953 "listen_address": { 00:21:39.953 "trtype": "TCP", 00:21:39.953 "adrfam": "IPv4", 00:21:39.953 "traddr": "10.0.0.2", 00:21:39.953 "trsvcid": "4420" 00:21:39.953 }, 00:21:39.953 "peer_address": { 00:21:39.953 "trtype": "TCP", 00:21:39.953 "adrfam": "IPv4", 00:21:39.953 "traddr": "10.0.0.1", 00:21:39.953 "trsvcid": "41074" 00:21:39.953 }, 00:21:39.953 "auth": { 00:21:39.953 "state": "completed", 00:21:39.953 "digest": "sha512", 00:21:39.953 "dhgroup": "null" 00:21:39.953 } 00:21:39.953 } 00:21:39.953 ]' 00:21:39.953 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.953 22:44:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.953 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.953 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:39.953 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.953 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.953 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.953 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.211 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:21:40.211 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:21:41.144 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.144 22:44:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:41.144 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.144 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.144 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.144 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.144 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:41.144 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:41.402 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:41.402 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.402 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:41.402 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:41.402 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:41.402 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.402 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.402 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.402 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.402 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.402 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.402 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.402 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.967 00:21:41.967 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.967 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.967 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.224 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.224 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.224 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.224 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.224 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.224 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.224 { 00:21:42.224 "cntlid": 99, 00:21:42.224 "qid": 0, 00:21:42.224 "state": "enabled", 00:21:42.224 "thread": "nvmf_tgt_poll_group_000", 00:21:42.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:42.224 "listen_address": { 00:21:42.224 "trtype": "TCP", 00:21:42.224 "adrfam": "IPv4", 00:21:42.224 "traddr": "10.0.0.2", 00:21:42.224 "trsvcid": "4420" 00:21:42.224 }, 00:21:42.224 "peer_address": { 00:21:42.224 "trtype": "TCP", 00:21:42.225 "adrfam": "IPv4", 00:21:42.225 "traddr": "10.0.0.1", 00:21:42.225 "trsvcid": "41100" 00:21:42.225 }, 00:21:42.225 "auth": { 00:21:42.225 "state": "completed", 00:21:42.225 "digest": "sha512", 00:21:42.225 "dhgroup": "null" 00:21:42.225 } 00:21:42.225 } 00:21:42.225 ]' 00:21:42.225 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.225 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.225 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.225 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:42.225 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.225 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.225 
22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.225 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.483 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:21:42.483 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:21:43.416 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.416 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.416 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.416 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.416 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.416 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.416 
22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:43.416 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:43.674 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:43.674 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.674 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:43.674 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:43.674 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:43.674 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.674 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.674 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.674 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.674 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.674 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.675 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.675 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.933 00:21:43.933 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.933 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.933 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.191 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.191 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.191 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.191 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.191 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.191 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.191 { 00:21:44.191 "cntlid": 101, 00:21:44.191 "qid": 0, 00:21:44.191 "state": "enabled", 00:21:44.191 "thread": "nvmf_tgt_poll_group_000", 00:21:44.191 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:44.191 "listen_address": { 00:21:44.191 "trtype": "TCP", 00:21:44.191 "adrfam": "IPv4", 00:21:44.191 "traddr": "10.0.0.2", 00:21:44.191 "trsvcid": "4420" 00:21:44.191 }, 00:21:44.191 "peer_address": { 00:21:44.191 "trtype": "TCP", 00:21:44.191 "adrfam": "IPv4", 00:21:44.191 "traddr": "10.0.0.1", 00:21:44.191 "trsvcid": "41128" 00:21:44.191 }, 00:21:44.191 "auth": { 00:21:44.191 "state": "completed", 00:21:44.191 "digest": "sha512", 00:21:44.191 "dhgroup": "null" 00:21:44.191 } 00:21:44.191 } 00:21:44.191 ]' 00:21:44.191 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.448 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.448 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.448 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:44.448 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.448 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.448 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.448 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.707 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:21:44.707 22:44:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:21:45.639 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.639 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.639 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.639 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.639 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.639 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:45.639 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:45.639 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:45.896 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:45.896 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:21:45.896 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:45.896 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:45.896 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:45.896 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.896 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:45.897 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.897 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.897 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.897 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:45.897 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:45.897 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:46.462 00:21:46.462 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.462 
22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.462 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.462 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.462 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.462 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.462 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.720 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.720 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.720 { 00:21:46.720 "cntlid": 103, 00:21:46.720 "qid": 0, 00:21:46.720 "state": "enabled", 00:21:46.720 "thread": "nvmf_tgt_poll_group_000", 00:21:46.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:46.720 "listen_address": { 00:21:46.720 "trtype": "TCP", 00:21:46.720 "adrfam": "IPv4", 00:21:46.720 "traddr": "10.0.0.2", 00:21:46.720 "trsvcid": "4420" 00:21:46.720 }, 00:21:46.720 "peer_address": { 00:21:46.720 "trtype": "TCP", 00:21:46.720 "adrfam": "IPv4", 00:21:46.720 "traddr": "10.0.0.1", 00:21:46.720 "trsvcid": "41146" 00:21:46.720 }, 00:21:46.720 "auth": { 00:21:46.720 "state": "completed", 00:21:46.720 "digest": "sha512", 00:21:46.720 "dhgroup": "null" 00:21:46.720 } 00:21:46.720 } 00:21:46.720 ]' 00:21:46.720 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.720 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ 
sha512 == \s\h\a\5\1\2 ]] 00:21:46.720 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.720 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:46.720 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.720 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.720 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.720 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.978 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:21:46.978 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:21:47.910 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.910 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.910 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.910 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.910 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.910 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:47.910 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:47.910 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:47.910 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:48.168 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:48.168 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.168 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:48.168 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:48.168 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:48.168 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.168 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.168 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.168 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.168 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.168 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.168 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.168 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.426 00:21:48.426 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:48.426 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.426 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:48.684 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.684 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.684 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:21:48.684 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.684 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.684 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.684 { 00:21:48.684 "cntlid": 105, 00:21:48.684 "qid": 0, 00:21:48.684 "state": "enabled", 00:21:48.684 "thread": "nvmf_tgt_poll_group_000", 00:21:48.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:48.684 "listen_address": { 00:21:48.684 "trtype": "TCP", 00:21:48.684 "adrfam": "IPv4", 00:21:48.684 "traddr": "10.0.0.2", 00:21:48.684 "trsvcid": "4420" 00:21:48.684 }, 00:21:48.684 "peer_address": { 00:21:48.684 "trtype": "TCP", 00:21:48.684 "adrfam": "IPv4", 00:21:48.684 "traddr": "10.0.0.1", 00:21:48.684 "trsvcid": "41162" 00:21:48.684 }, 00:21:48.684 "auth": { 00:21:48.684 "state": "completed", 00:21:48.684 "digest": "sha512", 00:21:48.684 "dhgroup": "ffdhe2048" 00:21:48.684 } 00:21:48.684 } 00:21:48.684 ]' 00:21:48.684 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.942 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.942 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:48.942 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:48.942 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:48.942 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.942 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.942 22:44:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.200 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:21:49.200 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:21:50.132 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.132 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:50.132 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.132 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.132 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.132 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.132 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:50.132 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:50.390 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:50.390 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.390 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:50.390 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:50.390 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:50.390 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.390 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.390 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.390 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.390 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.390 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.390 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.390 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.648 00:21:50.648 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.648 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.648 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.905 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.905 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.905 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.905 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.905 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.905 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.905 { 00:21:50.905 "cntlid": 107, 00:21:50.905 "qid": 0, 00:21:50.905 "state": "enabled", 00:21:50.905 "thread": "nvmf_tgt_poll_group_000", 00:21:50.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:50.905 
"listen_address": { 00:21:50.905 "trtype": "TCP", 00:21:50.905 "adrfam": "IPv4", 00:21:50.905 "traddr": "10.0.0.2", 00:21:50.905 "trsvcid": "4420" 00:21:50.906 }, 00:21:50.906 "peer_address": { 00:21:50.906 "trtype": "TCP", 00:21:50.906 "adrfam": "IPv4", 00:21:50.906 "traddr": "10.0.0.1", 00:21:50.906 "trsvcid": "48140" 00:21:50.906 }, 00:21:50.906 "auth": { 00:21:50.906 "state": "completed", 00:21:50.906 "digest": "sha512", 00:21:50.906 "dhgroup": "ffdhe2048" 00:21:50.906 } 00:21:50.906 } 00:21:50.906 ]' 00:21:50.906 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.906 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.906 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.163 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:51.163 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.163 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.163 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.163 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.421 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:21:51.421 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:21:52.356 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.356 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:52.356 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.356 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.356 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.356 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.356 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:52.356 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:52.616 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:52.616 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.616 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:52.616 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:52.616 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:52.616 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.616 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.616 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.616 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.616 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.616 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.616 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.616 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.877 00:21:52.877 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:52.877 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.877 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.136 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.136 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.136 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.136 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.136 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.136 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:53.136 { 00:21:53.136 "cntlid": 109, 00:21:53.136 "qid": 0, 00:21:53.136 "state": "enabled", 00:21:53.136 "thread": "nvmf_tgt_poll_group_000", 00:21:53.136 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:53.136 "listen_address": { 00:21:53.136 "trtype": "TCP", 00:21:53.136 "adrfam": "IPv4", 00:21:53.136 "traddr": "10.0.0.2", 00:21:53.136 "trsvcid": "4420" 00:21:53.136 }, 00:21:53.136 "peer_address": { 00:21:53.136 "trtype": "TCP", 00:21:53.136 "adrfam": "IPv4", 00:21:53.136 "traddr": "10.0.0.1", 00:21:53.136 "trsvcid": "48172" 00:21:53.136 }, 00:21:53.136 "auth": { 00:21:53.136 "state": "completed", 00:21:53.136 "digest": "sha512", 00:21:53.136 "dhgroup": "ffdhe2048" 00:21:53.136 } 00:21:53.136 } 00:21:53.136 ]' 00:21:53.136 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:53.136 22:44:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.136 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:53.394 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:53.394 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:53.394 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.394 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.394 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.652 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:21:53.652 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:21:54.584 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.584 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.584 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.584 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.584 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.584 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:54.584 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:54.584 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:54.843 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:54.843 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.843 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:54.843 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:54.843 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:54.843 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.843 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:54.843 22:44:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.843 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.843 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.843 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:54.843 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:54.843 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:55.101 00:21:55.101 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.101 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.101 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.359 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.359 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.359 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.359 22:44:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.359 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.359 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:55.359 { 00:21:55.359 "cntlid": 111, 00:21:55.359 "qid": 0, 00:21:55.359 "state": "enabled", 00:21:55.359 "thread": "nvmf_tgt_poll_group_000", 00:21:55.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:55.359 "listen_address": { 00:21:55.359 "trtype": "TCP", 00:21:55.359 "adrfam": "IPv4", 00:21:55.359 "traddr": "10.0.0.2", 00:21:55.359 "trsvcid": "4420" 00:21:55.359 }, 00:21:55.359 "peer_address": { 00:21:55.359 "trtype": "TCP", 00:21:55.359 "adrfam": "IPv4", 00:21:55.359 "traddr": "10.0.0.1", 00:21:55.359 "trsvcid": "48184" 00:21:55.359 }, 00:21:55.359 "auth": { 00:21:55.359 "state": "completed", 00:21:55.359 "digest": "sha512", 00:21:55.359 "dhgroup": "ffdhe2048" 00:21:55.359 } 00:21:55.360 } 00:21:55.360 ]' 00:21:55.360 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.360 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.360 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:55.360 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:55.360 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.618 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.618 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.618 22:44:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.875 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:21:55.875 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:21:56.809 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.809 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.809 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.809 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.809 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.809 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:56.809 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:56.809 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe3072 00:21:56.809 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:57.067 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:57.067 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.067 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:57.067 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:57.067 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:57.067 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.067 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.067 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.067 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.067 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.067 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.067 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.067 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.325 00:21:57.325 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.325 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.325 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.583 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.583 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.583 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.583 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.583 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.583 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.583 { 00:21:57.583 "cntlid": 113, 00:21:57.583 "qid": 0, 00:21:57.583 "state": "enabled", 00:21:57.583 "thread": "nvmf_tgt_poll_group_000", 00:21:57.583 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:57.583 "listen_address": { 
00:21:57.583 "trtype": "TCP", 00:21:57.583 "adrfam": "IPv4", 00:21:57.583 "traddr": "10.0.0.2", 00:21:57.583 "trsvcid": "4420" 00:21:57.583 }, 00:21:57.583 "peer_address": { 00:21:57.583 "trtype": "TCP", 00:21:57.583 "adrfam": "IPv4", 00:21:57.583 "traddr": "10.0.0.1", 00:21:57.583 "trsvcid": "48210" 00:21:57.583 }, 00:21:57.583 "auth": { 00:21:57.583 "state": "completed", 00:21:57.583 "digest": "sha512", 00:21:57.583 "dhgroup": "ffdhe3072" 00:21:57.583 } 00:21:57.583 } 00:21:57.583 ]' 00:21:57.583 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.840 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.840 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.840 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:57.840 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.841 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.841 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.841 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.099 22:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:21:58.099 22:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:21:59.032 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.032 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.032 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.032 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.032 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.032 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:59.032 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:59.032 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:59.291 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:59.291 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:21:59.291 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:59.291 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:59.291 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:59.291 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.291 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.291 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.291 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.291 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.291 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.291 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.291 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.548 00:21:59.548 22:45:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.548 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.548 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.806 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.806 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.806 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.806 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.806 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.806 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.806 { 00:21:59.806 "cntlid": 115, 00:21:59.806 "qid": 0, 00:21:59.806 "state": "enabled", 00:21:59.806 "thread": "nvmf_tgt_poll_group_000", 00:21:59.806 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:59.806 "listen_address": { 00:21:59.806 "trtype": "TCP", 00:21:59.806 "adrfam": "IPv4", 00:21:59.806 "traddr": "10.0.0.2", 00:21:59.806 "trsvcid": "4420" 00:21:59.806 }, 00:21:59.806 "peer_address": { 00:21:59.806 "trtype": "TCP", 00:21:59.806 "adrfam": "IPv4", 00:21:59.806 "traddr": "10.0.0.1", 00:21:59.806 "trsvcid": "33746" 00:21:59.806 }, 00:21:59.806 "auth": { 00:21:59.806 "state": "completed", 00:21:59.806 "digest": "sha512", 00:21:59.806 "dhgroup": "ffdhe3072" 00:21:59.806 } 00:21:59.806 } 00:21:59.806 ]' 00:21:59.806 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # 
jq -r '.[0].auth.digest' 00:22:00.064 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.064 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.064 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:00.064 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.064 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.064 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.064 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.322 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:22:00.322 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:22:01.255 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.255 22:45:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.255 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.255 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.255 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.255 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:01.255 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:01.255 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:01.513 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:22:01.513 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:01.513 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:01.513 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:01.513 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:01.513 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.513 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.513 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.513 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.513 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.513 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.513 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.513 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:02.077 00:22:02.077 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.077 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.077 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.077 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.077 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.077 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.077 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.334 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.334 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:02.334 { 00:22:02.334 "cntlid": 117, 00:22:02.334 "qid": 0, 00:22:02.334 "state": "enabled", 00:22:02.334 "thread": "nvmf_tgt_poll_group_000", 00:22:02.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:02.334 "listen_address": { 00:22:02.334 "trtype": "TCP", 00:22:02.334 "adrfam": "IPv4", 00:22:02.335 "traddr": "10.0.0.2", 00:22:02.335 "trsvcid": "4420" 00:22:02.335 }, 00:22:02.335 "peer_address": { 00:22:02.335 "trtype": "TCP", 00:22:02.335 "adrfam": "IPv4", 00:22:02.335 "traddr": "10.0.0.1", 00:22:02.335 "trsvcid": "33772" 00:22:02.335 }, 00:22:02.335 "auth": { 00:22:02.335 "state": "completed", 00:22:02.335 "digest": "sha512", 00:22:02.335 "dhgroup": "ffdhe3072" 00:22:02.335 } 00:22:02.335 } 00:22:02.335 ]' 00:22:02.335 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:02.335 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.335 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.335 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:02.335 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.335 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:22:02.335 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.335 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.592 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:22:02.592 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:22:03.525 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.525 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.525 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.525 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.525 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.525 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:22:03.525 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:03.525 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:03.783 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:22:03.783 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:03.783 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:03.783 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:03.783 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:03.783 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.783 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:03.783 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.783 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.783 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.783 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:03.783 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:03.783 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:04.349 00:22:04.349 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.349 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.349 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.607 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.607 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.607 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.607 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.607 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.607 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.607 { 00:22:04.607 "cntlid": 119, 00:22:04.607 "qid": 0, 00:22:04.607 "state": "enabled", 00:22:04.607 "thread": "nvmf_tgt_poll_group_000", 00:22:04.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:04.607 "listen_address": { 00:22:04.607 
"trtype": "TCP", 00:22:04.607 "adrfam": "IPv4", 00:22:04.607 "traddr": "10.0.0.2", 00:22:04.607 "trsvcid": "4420" 00:22:04.607 }, 00:22:04.607 "peer_address": { 00:22:04.607 "trtype": "TCP", 00:22:04.607 "adrfam": "IPv4", 00:22:04.607 "traddr": "10.0.0.1", 00:22:04.607 "trsvcid": "33784" 00:22:04.607 }, 00:22:04.607 "auth": { 00:22:04.607 "state": "completed", 00:22:04.607 "digest": "sha512", 00:22:04.607 "dhgroup": "ffdhe3072" 00:22:04.607 } 00:22:04.607 } 00:22:04.607 ]' 00:22:04.607 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.607 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.607 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.607 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:04.607 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.607 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.607 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.607 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.865 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:22:04.865 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:22:05.798 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.798 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.798 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.798 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.798 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.798 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:05.798 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.798 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:05.798 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:06.056 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:22:06.056 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:06.056 22:45:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:06.056 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:06.056 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:06.056 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.056 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.056 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.056 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.056 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.056 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.056 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.056 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.621 00:22:06.621 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:06.621 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:06.621 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.621 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.621 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.621 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.621 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.621 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.621 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.621 { 00:22:06.621 "cntlid": 121, 00:22:06.621 "qid": 0, 00:22:06.621 "state": "enabled", 00:22:06.621 "thread": "nvmf_tgt_poll_group_000", 00:22:06.621 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:06.621 "listen_address": { 00:22:06.621 "trtype": "TCP", 00:22:06.621 "adrfam": "IPv4", 00:22:06.621 "traddr": "10.0.0.2", 00:22:06.621 "trsvcid": "4420" 00:22:06.621 }, 00:22:06.621 "peer_address": { 00:22:06.621 "trtype": "TCP", 00:22:06.621 "adrfam": "IPv4", 00:22:06.621 "traddr": "10.0.0.1", 00:22:06.621 "trsvcid": "33814" 00:22:06.621 }, 00:22:06.621 "auth": { 00:22:06.621 "state": "completed", 00:22:06.621 "digest": "sha512", 00:22:06.621 "dhgroup": "ffdhe4096" 00:22:06.621 } 00:22:06.621 } 00:22:06.621 ]' 00:22:06.621 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:06.882 22:45:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.882 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.882 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:06.882 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:06.882 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.882 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.882 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.140 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:22:07.140 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:22:08.073 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:22:08.073 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.073 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.073 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.073 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.073 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:08.073 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:08.073 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:08.331 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:22:08.331 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:08.331 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:08.331 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:08.331 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:08.331 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.331 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.331 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.331 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.331 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.331 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.331 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.331 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.589 00:22:08.589 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:08.589 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:08.589 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.846 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.846 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.846 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.846 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.846 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.846 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:08.846 { 00:22:08.846 "cntlid": 123, 00:22:08.846 "qid": 0, 00:22:08.846 "state": "enabled", 00:22:08.846 "thread": "nvmf_tgt_poll_group_000", 00:22:08.846 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:08.846 "listen_address": { 00:22:08.846 "trtype": "TCP", 00:22:08.846 "adrfam": "IPv4", 00:22:08.846 "traddr": "10.0.0.2", 00:22:08.846 "trsvcid": "4420" 00:22:08.846 }, 00:22:08.846 "peer_address": { 00:22:08.846 "trtype": "TCP", 00:22:08.846 "adrfam": "IPv4", 00:22:08.846 "traddr": "10.0.0.1", 00:22:08.846 "trsvcid": "33828" 00:22:08.846 }, 00:22:08.846 "auth": { 00:22:08.846 "state": "completed", 00:22:08.846 "digest": "sha512", 00:22:08.846 "dhgroup": "ffdhe4096" 00:22:08.846 } 00:22:08.846 } 00:22:08.846 ]' 00:22:08.846 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:09.104 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:09.104 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:09.104 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:09.104 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:09.104 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:22:09.104 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.104 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.362 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:22:09.362 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:22:10.295 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.295 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:10.295 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.295 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.295 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.295 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:22:10.295 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:10.295 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:10.553 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:10.553 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.553 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:10.553 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:10.553 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:10.553 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.553 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.553 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.553 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.553 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.553 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.553 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.553 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.118 00:22:11.118 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:11.118 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.118 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:11.118 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.118 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.118 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.118 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.118 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.118 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:11.118 { 00:22:11.118 "cntlid": 125, 00:22:11.118 "qid": 0, 00:22:11.118 "state": "enabled", 00:22:11.118 "thread": "nvmf_tgt_poll_group_000", 00:22:11.118 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:11.118 "listen_address": { 00:22:11.118 "trtype": "TCP", 00:22:11.118 "adrfam": "IPv4", 00:22:11.118 "traddr": "10.0.0.2", 00:22:11.118 "trsvcid": "4420" 00:22:11.118 }, 00:22:11.118 "peer_address": { 00:22:11.118 "trtype": "TCP", 00:22:11.118 "adrfam": "IPv4", 00:22:11.118 "traddr": "10.0.0.1", 00:22:11.118 "trsvcid": "52254" 00:22:11.118 }, 00:22:11.118 "auth": { 00:22:11.118 "state": "completed", 00:22:11.118 "digest": "sha512", 00:22:11.118 "dhgroup": "ffdhe4096" 00:22:11.118 } 00:22:11.118 } 00:22:11.118 ]' 00:22:11.118 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.376 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:11.376 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.376 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:11.376 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.376 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.376 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.376 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.633 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:22:11.633 22:45:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:22:12.566 22:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.566 22:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:12.566 22:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.566 22:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.566 22:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.566 22:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:12.566 22:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:12.566 22:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:12.825 22:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:12.825 22:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:22:12.825 22:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:12.825 22:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:12.825 22:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:12.825 22:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.825 22:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:12.825 22:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.825 22:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.825 22:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.825 22:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:12.825 22:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:12.825 22:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:13.086 00:22:13.086 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:22:13.086 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:13.086 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.343 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.343 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.343 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.343 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.343 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.343 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:13.343 { 00:22:13.343 "cntlid": 127, 00:22:13.343 "qid": 0, 00:22:13.343 "state": "enabled", 00:22:13.343 "thread": "nvmf_tgt_poll_group_000", 00:22:13.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:13.343 "listen_address": { 00:22:13.343 "trtype": "TCP", 00:22:13.343 "adrfam": "IPv4", 00:22:13.343 "traddr": "10.0.0.2", 00:22:13.343 "trsvcid": "4420" 00:22:13.343 }, 00:22:13.343 "peer_address": { 00:22:13.343 "trtype": "TCP", 00:22:13.343 "adrfam": "IPv4", 00:22:13.343 "traddr": "10.0.0.1", 00:22:13.343 "trsvcid": "52280" 00:22:13.343 }, 00:22:13.343 "auth": { 00:22:13.343 "state": "completed", 00:22:13.343 "digest": "sha512", 00:22:13.343 "dhgroup": "ffdhe4096" 00:22:13.343 } 00:22:13.343 } 00:22:13.343 ]' 00:22:13.343 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:13.600 22:45:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:13.600 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:13.600 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:13.600 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:13.600 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.600 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.600 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.858 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:22:13.858 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:22:14.790 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.790 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:14.790 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.790 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.790 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.790 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:14.790 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:14.790 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:14.790 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:15.048 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:15.048 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:15.048 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:15.048 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:15.048 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:15.048 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.048 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.049 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.049 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.049 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.049 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.049 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.049 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.614 00:22:15.614 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:15.614 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:15.614 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.871 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.871 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.871 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.871 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.871 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.871 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:15.871 { 00:22:15.871 "cntlid": 129, 00:22:15.871 "qid": 0, 00:22:15.871 "state": "enabled", 00:22:15.871 "thread": "nvmf_tgt_poll_group_000", 00:22:15.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:15.871 "listen_address": { 00:22:15.871 "trtype": "TCP", 00:22:15.871 "adrfam": "IPv4", 00:22:15.871 "traddr": "10.0.0.2", 00:22:15.871 "trsvcid": "4420" 00:22:15.871 }, 00:22:15.871 "peer_address": { 00:22:15.871 "trtype": "TCP", 00:22:15.871 "adrfam": "IPv4", 00:22:15.871 "traddr": "10.0.0.1", 00:22:15.871 "trsvcid": "52310" 00:22:15.871 }, 00:22:15.871 "auth": { 00:22:15.871 "state": "completed", 00:22:15.871 "digest": "sha512", 00:22:15.871 "dhgroup": "ffdhe6144" 00:22:15.871 } 00:22:15.871 } 00:22:15.871 ]' 00:22:15.872 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:15.872 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:15.872 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:15.872 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:15.872 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:16.129 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:22:16.129 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.129 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.387 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:22:16.387 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:22:17.321 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.321 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.321 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.321 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.321 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.321 22:45:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:17.321 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:17.321 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:17.579 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:17.579 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:17.579 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:17.579 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:17.579 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:17.579 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.579 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.579 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.579 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.579 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.579 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:22:17.579 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.579 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:18.145 00:22:18.145 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:18.145 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:18.145 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.420 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.420 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.420 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.420 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.420 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.420 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:18.420 { 00:22:18.420 "cntlid": 131, 00:22:18.420 "qid": 0, 00:22:18.420 "state": 
"enabled", 00:22:18.420 "thread": "nvmf_tgt_poll_group_000", 00:22:18.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:18.420 "listen_address": { 00:22:18.420 "trtype": "TCP", 00:22:18.420 "adrfam": "IPv4", 00:22:18.420 "traddr": "10.0.0.2", 00:22:18.420 "trsvcid": "4420" 00:22:18.420 }, 00:22:18.420 "peer_address": { 00:22:18.420 "trtype": "TCP", 00:22:18.420 "adrfam": "IPv4", 00:22:18.420 "traddr": "10.0.0.1", 00:22:18.420 "trsvcid": "52348" 00:22:18.420 }, 00:22:18.420 "auth": { 00:22:18.420 "state": "completed", 00:22:18.420 "digest": "sha512", 00:22:18.420 "dhgroup": "ffdhe6144" 00:22:18.420 } 00:22:18.420 } 00:22:18.420 ]' 00:22:18.420 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:18.420 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:18.420 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:18.420 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:18.420 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:18.420 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.420 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.420 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.678 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret 
DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:22:18.678 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:22:19.611 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.611 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:19.611 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.611 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.611 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.611 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:19.611 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:19.611 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:19.869 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 
ffdhe6144 2 00:22:19.869 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:19.869 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:19.869 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:19.869 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:19.869 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.869 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.869 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.869 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.869 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.869 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.869 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.869 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:20.434 00:22:20.434 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:20.434 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:20.434 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.692 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.692 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.692 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.692 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.692 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.692 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:20.692 { 00:22:20.692 "cntlid": 133, 00:22:20.692 "qid": 0, 00:22:20.692 "state": "enabled", 00:22:20.692 "thread": "nvmf_tgt_poll_group_000", 00:22:20.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:20.692 "listen_address": { 00:22:20.692 "trtype": "TCP", 00:22:20.692 "adrfam": "IPv4", 00:22:20.692 "traddr": "10.0.0.2", 00:22:20.692 "trsvcid": "4420" 00:22:20.692 }, 00:22:20.692 "peer_address": { 00:22:20.692 "trtype": "TCP", 00:22:20.692 "adrfam": "IPv4", 00:22:20.692 "traddr": "10.0.0.1", 00:22:20.692 "trsvcid": "59888" 00:22:20.692 }, 00:22:20.692 "auth": { 00:22:20.692 "state": "completed", 00:22:20.692 "digest": "sha512", 00:22:20.692 "dhgroup": "ffdhe6144" 00:22:20.692 } 
00:22:20.692 } 00:22:20.692 ]' 00:22:20.693 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:20.693 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:20.693 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:20.693 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:20.693 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:20.693 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.693 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.693 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.951 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:22:20.951 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:22:21.884 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:22:21.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.884 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:21.884 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.884 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.884 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.884 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:21.884 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:21.884 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:22.142 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:22.142 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:22.142 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:22.142 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:22.142 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:22.142 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.142 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:22.142 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.142 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.142 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.142 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:22.142 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:22.142 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:22.707 00:22:22.707 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:22.707 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:22.707 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.965 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.965 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:22:22.965 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.965 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.965 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.965 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:22.965 { 00:22:22.965 "cntlid": 135, 00:22:22.965 "qid": 0, 00:22:22.965 "state": "enabled", 00:22:22.965 "thread": "nvmf_tgt_poll_group_000", 00:22:22.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:22.965 "listen_address": { 00:22:22.965 "trtype": "TCP", 00:22:22.965 "adrfam": "IPv4", 00:22:22.965 "traddr": "10.0.0.2", 00:22:22.965 "trsvcid": "4420" 00:22:22.965 }, 00:22:22.965 "peer_address": { 00:22:22.965 "trtype": "TCP", 00:22:22.965 "adrfam": "IPv4", 00:22:22.965 "traddr": "10.0.0.1", 00:22:22.965 "trsvcid": "59920" 00:22:22.965 }, 00:22:22.965 "auth": { 00:22:22.965 "state": "completed", 00:22:22.965 "digest": "sha512", 00:22:22.965 "dhgroup": "ffdhe6144" 00:22:22.965 } 00:22:22.965 } 00:22:22.965 ]' 00:22:22.965 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:22.965 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:22.965 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:23.223 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:23.223 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:23.223 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.223 22:45:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.223 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.481 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:22:23.481 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:22:24.413 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.413 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:24.413 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.413 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.413 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.413 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:24.413 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:24.413 22:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:24.413 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:24.671 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:24.671 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:24.671 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:24.671 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:24.671 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:24.671 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.671 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:24.671 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.671 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.671 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.671 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:24.671 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:24.671 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.603 00:22:25.603 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:25.603 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:25.603 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.603 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.603 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.603 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.603 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.603 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.604 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:25.604 { 00:22:25.604 "cntlid": 137, 00:22:25.604 "qid": 0, 00:22:25.604 "state": "enabled", 00:22:25.604 "thread": "nvmf_tgt_poll_group_000", 00:22:25.604 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:25.604 "listen_address": { 00:22:25.604 "trtype": "TCP", 00:22:25.604 "adrfam": "IPv4", 00:22:25.604 "traddr": "10.0.0.2", 00:22:25.604 "trsvcid": "4420" 00:22:25.604 }, 00:22:25.604 "peer_address": { 00:22:25.604 "trtype": "TCP", 00:22:25.604 "adrfam": "IPv4", 00:22:25.604 "traddr": "10.0.0.1", 00:22:25.604 "trsvcid": "59942" 00:22:25.604 }, 00:22:25.604 "auth": { 00:22:25.604 "state": "completed", 00:22:25.604 "digest": "sha512", 00:22:25.604 "dhgroup": "ffdhe8192" 00:22:25.604 } 00:22:25.604 } 00:22:25.604 ]' 00:22:25.604 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:25.861 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:25.861 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:25.861 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:25.861 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:25.861 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.861 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.861 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.119 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret 
DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:22:26.119 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:22:27.052 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.053 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:27.053 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.053 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.053 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.053 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:27.053 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:27.053 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:27.310 22:45:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:27.310 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:27.310 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:27.310 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:27.310 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:27.310 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.310 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.310 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.310 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.310 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.310 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.310 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.310 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.328 00:22:28.328 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:28.328 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:28.329 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.329 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.329 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.329 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.329 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.329 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.329 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:28.329 { 00:22:28.329 "cntlid": 139, 00:22:28.329 "qid": 0, 00:22:28.329 "state": "enabled", 00:22:28.329 "thread": "nvmf_tgt_poll_group_000", 00:22:28.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:28.329 "listen_address": { 00:22:28.329 "trtype": "TCP", 00:22:28.329 "adrfam": "IPv4", 00:22:28.329 "traddr": "10.0.0.2", 00:22:28.329 "trsvcid": "4420" 00:22:28.329 }, 00:22:28.329 "peer_address": { 00:22:28.329 "trtype": "TCP", 00:22:28.329 "adrfam": "IPv4", 00:22:28.329 "traddr": "10.0.0.1", 00:22:28.329 "trsvcid": "59974" 00:22:28.329 }, 00:22:28.329 "auth": { 00:22:28.329 "state": 
"completed", 00:22:28.329 "digest": "sha512", 00:22:28.329 "dhgroup": "ffdhe8192" 00:22:28.329 } 00:22:28.329 } 00:22:28.329 ]' 00:22:28.329 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:28.607 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:28.607 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:28.607 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:28.607 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:28.607 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.607 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.607 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.880 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:22:28.880 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: --dhchap-ctrl-secret DHHC-1:02:MjY3MTZiMGFlOWViNDAxNWQ1ZWQ2NmYwZjM1OTllYTY0NDQyYzFmYjIzM2RiZGUyyrjRCQ==: 00:22:29.851 22:45:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.851 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:29.851 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.851 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.851 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.851 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:29.852 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:29.852 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:30.109 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:30.109 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:30.109 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:30.109 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:30.109 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:30.109 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.109 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.109 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.109 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.109 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.109 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.109 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.109 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.040 00:22:31.040 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:31.041 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:31.041 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.298 
22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.298 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:31.298 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.298 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.298 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.298 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:31.298 { 00:22:31.298 "cntlid": 141, 00:22:31.298 "qid": 0, 00:22:31.298 "state": "enabled", 00:22:31.298 "thread": "nvmf_tgt_poll_group_000", 00:22:31.298 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:31.298 "listen_address": { 00:22:31.298 "trtype": "TCP", 00:22:31.298 "adrfam": "IPv4", 00:22:31.298 "traddr": "10.0.0.2", 00:22:31.298 "trsvcid": "4420" 00:22:31.298 }, 00:22:31.298 "peer_address": { 00:22:31.298 "trtype": "TCP", 00:22:31.298 "adrfam": "IPv4", 00:22:31.298 "traddr": "10.0.0.1", 00:22:31.298 "trsvcid": "55026" 00:22:31.298 }, 00:22:31.298 "auth": { 00:22:31.298 "state": "completed", 00:22:31.298 "digest": "sha512", 00:22:31.298 "dhgroup": "ffdhe8192" 00:22:31.298 } 00:22:31.298 } 00:22:31.298 ]' 00:22:31.298 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:31.298 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:31.298 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:31.298 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:31.298 22:45:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:31.298 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.298 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.298 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.556 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:22:31.556 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:01:YmIzYjRkYzJiMzY0YTIzYzM0NWQ5ODIzMzg5NGQzZDjqXfZF: 00:22:32.489 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:32.489 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:32.489 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.489 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.489 
22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.489 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:32.489 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:32.489 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:32.747 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:32.747 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:32.747 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:32.747 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:32.747 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:32.747 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.747 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:32.747 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.747 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.747 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.747 22:45:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:32.747 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:32.747 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:33.679 00:22:33.679 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:33.679 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:33.679 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.936 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.936 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:33.936 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.936 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.936 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.936 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:33.936 { 00:22:33.936 "cntlid": 143, 
00:22:33.936 "qid": 0, 00:22:33.936 "state": "enabled", 00:22:33.936 "thread": "nvmf_tgt_poll_group_000", 00:22:33.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:33.936 "listen_address": { 00:22:33.936 "trtype": "TCP", 00:22:33.936 "adrfam": "IPv4", 00:22:33.936 "traddr": "10.0.0.2", 00:22:33.936 "trsvcid": "4420" 00:22:33.936 }, 00:22:33.936 "peer_address": { 00:22:33.936 "trtype": "TCP", 00:22:33.936 "adrfam": "IPv4", 00:22:33.936 "traddr": "10.0.0.1", 00:22:33.936 "trsvcid": "55056" 00:22:33.936 }, 00:22:33.936 "auth": { 00:22:33.936 "state": "completed", 00:22:33.936 "digest": "sha512", 00:22:33.936 "dhgroup": "ffdhe8192" 00:22:33.936 } 00:22:33.936 } 00:22:33.936 ]' 00:22:33.936 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:33.936 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:33.937 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:33.937 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:33.937 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:33.937 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.937 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.937 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.194 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:22:34.194 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:22:35.126 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:35.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:35.126 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:35.126 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.126 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.126 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.126 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:35.126 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:35.126 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:35.126 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:35.126 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:22:35.126 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:35.384 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:35.384 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:35.384 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:35.384 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:35.384 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:35.384 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.384 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.384 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.384 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.384 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.384 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.384 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.384 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.316 00:22:36.316 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:36.316 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:36.316 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.574 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.574 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:36.574 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.574 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.574 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.574 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:36.574 { 00:22:36.574 "cntlid": 145, 00:22:36.574 "qid": 0, 00:22:36.574 "state": "enabled", 00:22:36.574 "thread": "nvmf_tgt_poll_group_000", 00:22:36.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:36.574 "listen_address": { 
00:22:36.574 "trtype": "TCP", 00:22:36.574 "adrfam": "IPv4", 00:22:36.574 "traddr": "10.0.0.2", 00:22:36.574 "trsvcid": "4420" 00:22:36.574 }, 00:22:36.574 "peer_address": { 00:22:36.574 "trtype": "TCP", 00:22:36.574 "adrfam": "IPv4", 00:22:36.574 "traddr": "10.0.0.1", 00:22:36.574 "trsvcid": "55088" 00:22:36.574 }, 00:22:36.574 "auth": { 00:22:36.574 "state": "completed", 00:22:36.574 "digest": "sha512", 00:22:36.574 "dhgroup": "ffdhe8192" 00:22:36.574 } 00:22:36.574 } 00:22:36.574 ]' 00:22:36.574 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:36.574 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:36.574 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:36.832 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:36.832 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:36.832 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.832 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.832 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.089 22:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:22:37.089 22:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMzZTVkNmE1MTI1NTMwMTllZTEzMjNmNDk5OTIyOTczMGEyZjJlMDI5MGNmOTMxX2okkA==: --dhchap-ctrl-secret DHHC-1:03:MDgzNTk3ZDliMzI3YzZmMmM1NTkzOGZlZDlmMjk0NDk3MWMyZjVmYjk3NDM2OTFlZmJjY2UxN2Y5NTBlNGYwNAgTCAc=: 00:22:38.022 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.022 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:38.022 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.022 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.022 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.022 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:38.022 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.022 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.022 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.022 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:38.022 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@650 -- # local es=0 00:22:38.022 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:38.022 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:38.022 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:38.023 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:38.023 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:38.023 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:38.023 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:38.023 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:38.956 request: 00:22:38.956 { 00:22:38.956 "name": "nvme0", 00:22:38.956 "trtype": "tcp", 00:22:38.956 "traddr": "10.0.0.2", 00:22:38.956 "adrfam": "ipv4", 00:22:38.956 "trsvcid": "4420", 00:22:38.956 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:38.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:38.956 "prchk_reftag": false, 00:22:38.956 "prchk_guard": false, 00:22:38.956 "hdgst": false, 00:22:38.956 "ddgst": 
false, 00:22:38.956 "dhchap_key": "key2", 00:22:38.956 "allow_unrecognized_csi": false, 00:22:38.956 "method": "bdev_nvme_attach_controller", 00:22:38.956 "req_id": 1 00:22:38.956 } 00:22:38.956 Got JSON-RPC error response 00:22:38.956 response: 00:22:38.956 { 00:22:38.956 "code": -5, 00:22:38.956 "message": "Input/output error" 00:22:38.956 } 00:22:38.956 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:38.956 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:38.956 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:38.956 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:38.956 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:38.956 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.956 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.956 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.956 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.956 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.956 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.956 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:22:38.956 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:38.956 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:38.956 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:38.956 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:38.956 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:38.956 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:38.956 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:38.956 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:38.956 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:38.956 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:39.521 request: 00:22:39.521 { 00:22:39.521 "name": "nvme0", 00:22:39.521 "trtype": "tcp", 00:22:39.521 "traddr": "10.0.0.2", 
00:22:39.521 "adrfam": "ipv4", 00:22:39.521 "trsvcid": "4420", 00:22:39.521 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:39.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:39.521 "prchk_reftag": false, 00:22:39.521 "prchk_guard": false, 00:22:39.521 "hdgst": false, 00:22:39.521 "ddgst": false, 00:22:39.521 "dhchap_key": "key1", 00:22:39.521 "dhchap_ctrlr_key": "ckey2", 00:22:39.521 "allow_unrecognized_csi": false, 00:22:39.521 "method": "bdev_nvme_attach_controller", 00:22:39.521 "req_id": 1 00:22:39.521 } 00:22:39.521 Got JSON-RPC error response 00:22:39.521 response: 00:22:39.521 { 00:22:39.521 "code": -5, 00:22:39.521 "message": "Input/output error" 00:22:39.521 } 00:22:39.521 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:39.521 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:39.521 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:39.521 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:39.521 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:39.521 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.521 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.521 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.521 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
00:22:39.522 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.522 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.522 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.522 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.522 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:39.522 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.522 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:39.522 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:39.522 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:39.522 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:39.522 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.522 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.522 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:40.454 request: 00:22:40.454 { 00:22:40.454 "name": "nvme0", 00:22:40.454 "trtype": "tcp", 00:22:40.454 "traddr": "10.0.0.2", 00:22:40.454 "adrfam": "ipv4", 00:22:40.454 "trsvcid": "4420", 00:22:40.454 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:40.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:40.454 "prchk_reftag": false, 00:22:40.454 "prchk_guard": false, 00:22:40.454 "hdgst": false, 00:22:40.454 "ddgst": false, 00:22:40.454 "dhchap_key": "key1", 00:22:40.454 "dhchap_ctrlr_key": "ckey1", 00:22:40.454 "allow_unrecognized_csi": false, 00:22:40.454 "method": "bdev_nvme_attach_controller", 00:22:40.454 "req_id": 1 00:22:40.454 } 00:22:40.454 Got JSON-RPC error response 00:22:40.454 response: 00:22:40.454 { 00:22:40.454 "code": -5, 00:22:40.454 "message": "Input/output error" 00:22:40.454 } 00:22:40.454 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:40.454 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:40.454 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:40.454 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:40.454 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:40.454 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.454 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.454 
22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.454 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 241157 00:22:40.454 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 241157 ']' 00:22:40.454 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 241157 00:22:40.455 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:40.455 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:40.455 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 241157 00:22:40.455 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:40.455 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:40.455 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 241157' 00:22:40.455 killing process with pid 241157 00:22:40.455 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 241157 00:22:40.455 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 241157 00:22:40.712 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:40.712 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:40.712 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:40.712 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:40.712 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=264216 00:22:40.712 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:40.712 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 264216 00:22:40.712 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 264216 ']' 00:22:40.713 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.713 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:40.713 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:40.713 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:40.713 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.971 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:40.971 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:40.971 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:40.971 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:40.971 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.971 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.971 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:40.971 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 264216 00:22:40.971 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 264216 ']' 00:22:40.971 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.971 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:40.971 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:40.971 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:40.971 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.229 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:41.229 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:41.229 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:41.229 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.229 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.488 null0 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.WDt 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.BV1 ]] 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.BV1 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.488 22:45:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.PI1 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.2Dv ]] 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2Dv 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.y1q 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.488 22:45:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.JEc ]] 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JEc 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.7HN 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:41.488 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:42.861 nvme0n1 00:22:42.861 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:42.861 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:42.861 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:22:43.118 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.118 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.118 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.118 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.118 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.118 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:43.118 { 00:22:43.118 "cntlid": 1, 00:22:43.118 "qid": 0, 00:22:43.118 "state": "enabled", 00:22:43.118 "thread": "nvmf_tgt_poll_group_000", 00:22:43.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:43.118 "listen_address": { 00:22:43.118 "trtype": "TCP", 00:22:43.118 "adrfam": "IPv4", 00:22:43.118 "traddr": "10.0.0.2", 00:22:43.118 "trsvcid": "4420" 00:22:43.118 }, 00:22:43.118 "peer_address": { 00:22:43.118 "trtype": "TCP", 00:22:43.118 "adrfam": "IPv4", 00:22:43.118 "traddr": "10.0.0.1", 00:22:43.118 "trsvcid": "44358" 00:22:43.118 }, 00:22:43.118 "auth": { 00:22:43.118 "state": "completed", 00:22:43.118 "digest": "sha512", 00:22:43.118 "dhgroup": "ffdhe8192" 00:22:43.118 } 00:22:43.118 } 00:22:43.118 ]' 00:22:43.118 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:43.118 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:43.118 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:43.118 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
00:22:43.118 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:43.376 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.376 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.376 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.634 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:22:43.634 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:22:44.567 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.567 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:44.567 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.567 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.567 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:22:44.567 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:44.567 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.567 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.567 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.567 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:44.567 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:44.825 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:44.825 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:44.825 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:44.825 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:44.825 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:44.825 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:44.825 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:44.825 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:22:44.825 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:44.825 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:45.083 request: 00:22:45.083 { 00:22:45.083 "name": "nvme0", 00:22:45.083 "trtype": "tcp", 00:22:45.083 "traddr": "10.0.0.2", 00:22:45.083 "adrfam": "ipv4", 00:22:45.083 "trsvcid": "4420", 00:22:45.083 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:45.083 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:45.083 "prchk_reftag": false, 00:22:45.083 "prchk_guard": false, 00:22:45.083 "hdgst": false, 00:22:45.083 "ddgst": false, 00:22:45.083 "dhchap_key": "key3", 00:22:45.083 "allow_unrecognized_csi": false, 00:22:45.083 "method": "bdev_nvme_attach_controller", 00:22:45.083 "req_id": 1 00:22:45.083 } 00:22:45.083 Got JSON-RPC error response 00:22:45.083 response: 00:22:45.083 { 00:22:45.083 "code": -5, 00:22:45.083 "message": "Input/output error" 00:22:45.083 } 00:22:45.083 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:45.083 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:45.083 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:45.083 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:45.083 22:45:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:45.083 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:45.083 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:45.083 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:45.341 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:45.341 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:45.341 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:45.341 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:45.341 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:45.341 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:45.341 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:45.341 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:45.341 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:45.341 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:45.599 request: 00:22:45.599 { 00:22:45.599 "name": "nvme0", 00:22:45.599 "trtype": "tcp", 00:22:45.599 "traddr": "10.0.0.2", 00:22:45.599 "adrfam": "ipv4", 00:22:45.599 "trsvcid": "4420", 00:22:45.599 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:45.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:45.599 "prchk_reftag": false, 00:22:45.599 "prchk_guard": false, 00:22:45.599 "hdgst": false, 00:22:45.599 "ddgst": false, 00:22:45.599 "dhchap_key": "key3", 00:22:45.599 "allow_unrecognized_csi": false, 00:22:45.599 "method": "bdev_nvme_attach_controller", 00:22:45.599 "req_id": 1 00:22:45.599 } 00:22:45.599 Got JSON-RPC error response 00:22:45.599 response: 00:22:45.599 { 00:22:45.599 "code": -5, 00:22:45.599 "message": "Input/output error" 00:22:45.599 } 00:22:45.599 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:45.599 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:45.599 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:45.599 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:45.599 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:45.599 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:45.599 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 
00:22:45.599 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:45.599 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:45.599 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:45.856 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:45.857 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.857 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.857 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.857 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:45.857 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.857 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.857 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.857 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key 
key1 00:22:45.857 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:45.857 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:45.857 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:45.857 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:45.857 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:45.857 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:45.857 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:45.857 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:45.857 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:46.423 request: 00:22:46.423 { 00:22:46.423 "name": "nvme0", 00:22:46.423 "trtype": "tcp", 00:22:46.423 "traddr": "10.0.0.2", 00:22:46.423 "adrfam": "ipv4", 00:22:46.423 "trsvcid": "4420", 00:22:46.423 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:46.423 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:46.423 "prchk_reftag": false, 00:22:46.423 "prchk_guard": false, 00:22:46.423 "hdgst": false, 00:22:46.423 "ddgst": false, 00:22:46.423 "dhchap_key": "key0", 00:22:46.423 "dhchap_ctrlr_key": "key1", 00:22:46.423 "allow_unrecognized_csi": false, 00:22:46.423 "method": "bdev_nvme_attach_controller", 00:22:46.423 "req_id": 1 00:22:46.423 } 00:22:46.423 Got JSON-RPC error response 00:22:46.423 response: 00:22:46.423 { 00:22:46.423 "code": -5, 00:22:46.423 "message": "Input/output error" 00:22:46.423 } 00:22:46.423 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:46.423 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:46.423 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:46.423 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:46.423 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:46.423 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:46.423 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:46.681 nvme0n1 00:22:46.681 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 
00:22:46.681 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:46.681 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.246 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.246 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:47.246 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:47.504 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:47.504 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.504 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.504 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.504 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:47.504 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:47.504 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:48.877 nvme0n1 00:22:48.877 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:48.877 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:48.877 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.877 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.877 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:48.877 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.877 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.877 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.135 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:49.135 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:49.135 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.393 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.393 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:22:49.393 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: --dhchap-ctrl-secret DHHC-1:03:YzNiYzBhZjJkZmE1NDQzYjgyMzcwZmUwMzc5ZTQzZmQyYTFjYmVjMmViNTQ3MTQ2NjkzNmI1YWNlMWQ2NGZjNwM16vM=: 00:22:50.326 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:50.326 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:50.326 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:50.326 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:50.326 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:50.326 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:50.326 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:50.326 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:50.326 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.583 22:45:53 
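[Editorial note, not part of the captured log.] The `--dhchap-secret` strings passed to `nvme connect` above use the `DHHC-1:<hash-id>:<base64>:` representation. A sketch of that encoding, assuming the convention from NVMe TP 8006 as implemented by nvme-cli/SPDK (base64 payload is the raw key followed by its little-endian CRC-32; hash-id `01`/`02`/`03` marks a SHA-256/384/512-sized key). This is an illustrative parser, not SPDK's own code:

```python
import base64, struct, zlib

def make_dhchap_secret(key: bytes, hash_id: int = 1) -> str:
    # Append the little-endian CRC-32 of the key, then base64-encode.
    blob = key + struct.pack("<I", zlib.crc32(key))
    return f"DHHC-1:{hash_id:02d}:{base64.b64encode(blob).decode()}:"

def parse_dhchap_secret(secret: str) -> bytes:
    prefix, _hash_id, b64, _ = secret.split(":")
    if prefix != "DHHC-1":
        raise ValueError("not a DHHC-1 secret")
    blob = base64.b64decode(b64)
    key, crc = blob[:-4], struct.unpack("<I", blob[-4:])[0]
    if zlib.crc32(key) != crc:
        raise ValueError("secret CRC mismatch")
    return key
```

The trailing `:` in the logged secrets is part of the format, which is why they survive shell word-splitting intact.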
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:22:50.583 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:50.583 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:50.583 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:50.583 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:50.583 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:50.583 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:50.583 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:50.583 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:50.583 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:51.147 request: 00:22:51.147 { 00:22:51.147 "name": "nvme0", 00:22:51.147 "trtype": "tcp", 00:22:51.148 "traddr": "10.0.0.2", 00:22:51.148 "adrfam": "ipv4", 00:22:51.148 "trsvcid": "4420", 00:22:51.148 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:51.148 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:51.148 "prchk_reftag": false, 00:22:51.148 "prchk_guard": false, 00:22:51.148 "hdgst": false, 00:22:51.148 "ddgst": false, 00:22:51.148 "dhchap_key": "key1", 00:22:51.148 "allow_unrecognized_csi": false, 00:22:51.148 "method": "bdev_nvme_attach_controller", 00:22:51.148 "req_id": 1 00:22:51.148 } 00:22:51.148 Got JSON-RPC error response 00:22:51.148 response: 00:22:51.148 { 00:22:51.148 "code": -5, 00:22:51.148 "message": "Input/output error" 00:22:51.148 } 00:22:51.405 22:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:51.405 22:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:51.405 22:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:51.405 22:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:51.406 22:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:51.406 22:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:51.406 22:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:52.780 nvme0n1 00:22:52.780 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc 
bdev_nvme_get_controllers 00:22:52.780 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:52.780 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.780 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.780 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:52.780 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.348 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:53.348 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.348 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.348 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.348 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:53.348 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:53.348 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:53.605 nvme0n1 00:22:53.605 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:53.605 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:53.605 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.863 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.863 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.863 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:54.121 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:54.121 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.121 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.121 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.121 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: '' 2s 00:22:54.121 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:54.121 22:45:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:54.121 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: 00:22:54.121 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:54.121 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:54.121 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:54.121 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: ]] 00:22:54.121 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NzY0MDQ1NjJiM2ZkMWQ2OWIyOTlkN2RjNmIzZjhmMjILhsGP: 00:22:54.121 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:54.121 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:54.121 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:56.020 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:56.020 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:22:56.020 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:56.020 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:22:56.020 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:56.020 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:22:56.278 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1246 -- # return 0 00:22:56.278 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:56.278 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.278 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.278 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.278 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: 2s 00:22:56.278 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:56.278 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:56.278 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:56.278 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: 00:22:56.278 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:56.278 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:56.278 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:56.278 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: ]] 00:22:56.278 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo 
DHHC-1:02:NDdiODY4NjhmNmQyZmE1MDQ0NmRlOTUwOGU5YTYwZGE1YTZjNDRkMDc5MjNjYWI1P7QJ1g==: 00:22:56.278 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:56.278 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:58.176 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:58.176 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:22:58.176 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:58.176 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:22:58.176 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:58.176 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:22:58.176 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:22:58.176 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:58.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:58.176 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:58.176 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.176 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.176 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.176 22:46:01 
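[Editorial note, not part of the captured log.] The `nvme_set_keys` helper traced above (`target/auth.sh@49`–`@56`) rotates the in-kernel secrets by echoing them into the controller's fabrics sysfs directory (`/sys/devices/virtual/nvme-fabrics/ctl/nvme0`) and then sleeping for the re-authentication timeout. A hedged sketch of that step; the attribute names `dhchap_secret`/`dhchap_ctrl_secret` are assumed from the Linux nvme core and are not shown verbatim in this log:

```python
import os

def nvme_set_keys(ctrl_dir: str, key: str = None, ckey: str = None) -> None:
    # Mirror target/auth.sh nvme_set_keys: write the host secret and/or the
    # controller secret into the controller's sysfs attributes.
    if key:
        with open(os.path.join(ctrl_dir, "dhchap_secret"), "w") as f:
            f.write(key)
    if ckey:
        with open(os.path.join(ctrl_dir, "dhchap_ctrl_secret"), "w") as f:
            f.write(ckey)
```

After the write, the test's `sleep 2s` and `waitforblk nvme0n1` give the kernel time to re-authenticate with the new key before the namespace is probed again.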
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:58.176 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:58.176 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:59.548 nvme0n1 00:22:59.548 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:59.548 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.548 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.548 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.548 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:59.548 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:00.480 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:23:00.480 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:23:00.480 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:00.738 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.738 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:00.738 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.738 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.738 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.738 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:23:00.738 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:23:00.996 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:23:00.996 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:23:00.996 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.253 22:46:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.253 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:01.253 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.253 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.253 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.253 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:01.253 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:01.253 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:01.253 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:23:01.253 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:01.253 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:23:01.253 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:01.253 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:01.253 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:02.187 request: 00:23:02.187 { 00:23:02.187 "name": "nvme0", 00:23:02.187 "dhchap_key": "key1", 00:23:02.187 "dhchap_ctrlr_key": "key3", 00:23:02.187 "method": "bdev_nvme_set_keys", 00:23:02.187 "req_id": 1 00:23:02.187 } 00:23:02.187 Got JSON-RPC error response 00:23:02.187 response: 00:23:02.187 { 00:23:02.187 "code": -13, 00:23:02.187 "message": "Permission denied" 00:23:02.187 } 00:23:02.187 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:02.187 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:02.187 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:02.187 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:02.187 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:02.187 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:02.187 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.187 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:23:02.187 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:23:03.570 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:03.570 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:03.570 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:03.570 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:23:03.570 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:03.570 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.570 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.570 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.570 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:03.570 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:03.570 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:04.943 nvme0n1 00:23:04.943 22:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:04.943 22:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.943 22:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.943 22:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.943 22:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:04.943 22:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:04.943 22:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:04.943 22:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:23:04.943 22:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:04.943 22:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:23:04.943 22:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:04.943 22:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:04.943 22:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:05.875 request: 00:23:05.875 { 00:23:05.875 "name": "nvme0", 00:23:05.875 "dhchap_key": "key2", 
00:23:05.875 "dhchap_ctrlr_key": "key0", 00:23:05.875 "method": "bdev_nvme_set_keys", 00:23:05.875 "req_id": 1 00:23:05.875 } 00:23:05.875 Got JSON-RPC error response 00:23:05.875 response: 00:23:05.875 { 00:23:05.875 "code": -13, 00:23:05.875 "message": "Permission denied" 00:23:05.875 } 00:23:05.875 22:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:05.875 22:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:05.875 22:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:05.875 22:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:05.875 22:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:05.875 22:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:05.875 22:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:06.132 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:23:06.132 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:23:07.065 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:07.065 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:07.065 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:07.324 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:23:07.324 22:46:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:23:07.324 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:23:07.324 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 241297 00:23:07.324 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 241297 ']' 00:23:07.324 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 241297 00:23:07.324 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:07.324 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:07.324 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 241297 00:23:07.324 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:07.324 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:07.324 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 241297' 00:23:07.324 killing process with pid 241297 00:23:07.324 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 241297 00:23:07.324 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 241297 00:23:07.889 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:07.889 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:07.889 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:23:07.889 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:23:07.889 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:23:07.889 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:07.889 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:07.889 rmmod nvme_tcp 00:23:07.889 rmmod nvme_fabrics 00:23:07.889 rmmod nvme_keyring 00:23:07.889 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:07.889 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:23:07.889 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:23:07.889 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 264216 ']' 00:23:07.889 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 264216 00:23:07.889 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 264216 ']' 00:23:07.889 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 264216 00:23:07.889 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:07.889 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:07.889 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 264216 00:23:07.889 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:07.889 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:07.889 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 264216' 
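The rejected `bdev_nvme_set_keys` calls earlier in this log fail with JSON-RPC error -13 ("Permission denied"): the target only accepts a re-key to key pairs it was configured with via `nvmf_subsystem_set_keys`. A minimal, illustrative sketch of classifying that error code from a captured response — the response JSON is copied from the log, and `jq` is assumed available:

```shell
# Error body copied from the failed bdev_nvme_set_keys call above.
resp='{"code": -13, "message": "Permission denied"}'

# Extract the JSON-RPC error code with jq and classify the outcome,
# mirroring what the harness's NOT wrapper asserts (a nonzero exit
# from hostrpc is the expected result here).
code=$(printf '%s' "$resp" | jq -r '.code')
if [ "$code" = "-13" ]; then
    echo "rekey rejected as expected"
else
    echo "unexpected error code: $code"
fi
```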
00:23:07.890 killing process with pid 264216 00:23:07.890 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 264216 00:23:07.890 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 264216 00:23:08.148 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:08.148 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:08.148 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:08.148 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:23:08.148 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:23:08.148 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:08.148 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:23:08.148 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:08.148 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:08.148 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.148 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:08.148 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.056 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:10.056 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.WDt /tmp/spdk.key-sha256.PI1 /tmp/spdk.key-sha384.y1q /tmp/spdk.key-sha512.7HN 
/tmp/spdk.key-sha512.BV1 /tmp/spdk.key-sha384.2Dv /tmp/spdk.key-sha256.JEc '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:10.056 00:23:10.056 real 3m32.606s 00:23:10.056 user 8m17.076s 00:23:10.056 sys 0m28.239s 00:23:10.056 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:10.056 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.056 ************************************ 00:23:10.056 END TEST nvmf_auth_target 00:23:10.056 ************************************ 00:23:10.056 22:46:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:23:10.056 22:46:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:10.056 22:46:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:23:10.056 22:46:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:10.056 22:46:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:10.315 ************************************ 00:23:10.315 START TEST nvmf_bdevio_no_huge 00:23:10.315 ************************************ 00:23:10.315 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:10.315 * Looking for test storage... 
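The teardown waits logged in the auth test above (auth.sh@262-263 and @272-273) poll `bdev_nvme_get_controllers` once per second until the controller list is empty, relying on the 1s `--ctrlr-loss-timeout-sec` to detach the controller. A self-contained sketch of that loop, with `hostrpc` stubbed for illustration (the real helper invokes `scripts/rpc.py -s /var/tmp/host.sock`):

```shell
polls=0

# Stub of the hostrpc helper: report one controller for the first two
# polls, then an empty list, emulating the ctrlr-loss timeout firing.
hostrpc() {
    if (( polls < 2 )); then
        echo '[{"name":"nvme0"}]'
    else
        echo '[]'
    fi
}

# The wait loop from the log: spin until the host sees no controllers.
while (( $(hostrpc bdev_nvme_get_controllers | jq length) != 0 )); do
    polls=$((polls + 1))
    sleep 0.1   # the real test sleeps 1s per iteration
done
echo "controller gone after $polls polls"
```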
00:23:10.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:10.315 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:10.315 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:23:10.315 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:23:10.316 22:46:13 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:10.316 22:46:13 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:10.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.316 --rc genhtml_branch_coverage=1 00:23:10.316 --rc genhtml_function_coverage=1 00:23:10.316 --rc genhtml_legend=1 00:23:10.316 --rc geninfo_all_blocks=1 00:23:10.316 --rc geninfo_unexecuted_blocks=1 00:23:10.316 00:23:10.316 ' 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:10.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.316 --rc genhtml_branch_coverage=1 00:23:10.316 --rc genhtml_function_coverage=1 00:23:10.316 --rc genhtml_legend=1 00:23:10.316 --rc geninfo_all_blocks=1 00:23:10.316 --rc geninfo_unexecuted_blocks=1 00:23:10.316 00:23:10.316 ' 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:10.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.316 --rc genhtml_branch_coverage=1 00:23:10.316 --rc genhtml_function_coverage=1 00:23:10.316 --rc genhtml_legend=1 00:23:10.316 --rc geninfo_all_blocks=1 00:23:10.316 --rc geninfo_unexecuted_blocks=1 00:23:10.316 00:23:10.316 ' 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:10.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.316 --rc genhtml_branch_coverage=1 00:23:10.316 --rc genhtml_function_coverage=1 00:23:10.316 --rc genhtml_legend=1 00:23:10.316 --rc geninfo_all_blocks=1 00:23:10.316 --rc geninfo_unexecuted_blocks=1 00:23:10.316 00:23:10.316 ' 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:10.316 
22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:10.316 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
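The `[: : integer expression expected` message captured above comes from nvmf/common.sh line 33 applying `-eq` to an empty expansion; the `[` builtin requires both operands of `-eq` to be integers and exits with status 2 otherwise. A small reproduction and a defaulting fix — the variable name here is made up for illustration:

```shell
flag=""   # illustrative stand-in for the unset variable in common.sh

# The failing pattern: '[' prints "integer expression expected" to
# stderr and returns status 2, so the else branch runs.
if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "enabled"
else
    echo "not enabled"
fi

# Defaulting the expansion avoids the error altogether.
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "not enabled"
fi
```

Either branch structure behaves the same for this input ("not enabled" twice), but only the second form runs without the stderr noise seen in the log.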
00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:10.316 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:10.317 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:10.317 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.317 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:10.317 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.317 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:10.317 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:10.317 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:23:10.317 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 
0x159b)' 00:23:12.846 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:12.846 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:12.846 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- 
# for pci in "${pci_devs[@]}" 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:12.847 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.847 
22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:12.847 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:23:12.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:12.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:23:12.847 00:23:12.847 --- 10.0.0.2 ping statistics --- 00:23:12.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.847 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:12.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:12.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:23:12.847 00:23:12.847 --- 10.0.0.1 ping statistics --- 00:23:12.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.847 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=269453 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 269453 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 269453 ']' 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:12.847 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:12.847 [2024-10-11 22:46:15.917344] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:23:12.847 [2024-10-11 22:46:15.917442] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:12.847 [2024-10-11 22:46:15.988485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:12.847 [2024-10-11 22:46:16.035093] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:12.847 [2024-10-11 22:46:16.035146] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.847 [2024-10-11 22:46:16.035160] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.847 [2024-10-11 22:46:16.035170] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:12.847 [2024-10-11 22:46:16.035179] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:12.847 [2024-10-11 22:46:16.036306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:12.847 [2024-10-11 22:46:16.036358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:23:12.847 [2024-10-11 22:46:16.036407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:23:12.847 [2024-10-11 22:46:16.036409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:13.105 [2024-10-11 22:46:16.182637] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:13.105 22:46:16 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:13.105 Malloc0 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:13.105 [2024-10-11 22:46:16.220411] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:13.105 22:46:16 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:13.105 { 00:23:13.105 "params": { 00:23:13.105 "name": "Nvme$subsystem", 00:23:13.105 "trtype": "$TEST_TRANSPORT", 00:23:13.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.105 "adrfam": "ipv4", 00:23:13.105 "trsvcid": "$NVMF_PORT", 00:23:13.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.105 "hdgst": ${hdgst:-false}, 00:23:13.105 "ddgst": ${ddgst:-false} 00:23:13.105 }, 00:23:13.105 "method": "bdev_nvme_attach_controller" 00:23:13.105 } 00:23:13.105 EOF 00:23:13.105 )") 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 
00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:23:13.105 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:23:13.105 "params": { 00:23:13.105 "name": "Nvme1", 00:23:13.105 "trtype": "tcp", 00:23:13.105 "traddr": "10.0.0.2", 00:23:13.105 "adrfam": "ipv4", 00:23:13.105 "trsvcid": "4420", 00:23:13.105 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.105 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:13.105 "hdgst": false, 00:23:13.105 "ddgst": false 00:23:13.105 }, 00:23:13.105 "method": "bdev_nvme_attach_controller" 00:23:13.105 }' 00:23:13.105 [2024-10-11 22:46:16.267087] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:23:13.105 [2024-10-11 22:46:16.267162] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid269484 ] 00:23:13.105 [2024-10-11 22:46:16.330967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:13.363 [2024-10-11 22:46:16.379375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:13.363 [2024-10-11 22:46:16.379428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.363 [2024-10-11 22:46:16.379431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.363 I/O targets: 00:23:13.363 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:13.363 00:23:13.363 00:23:13.363 CUnit - A unit testing framework for C - Version 2.1-3 00:23:13.363 http://cunit.sourceforge.net/ 00:23:13.363 00:23:13.363 00:23:13.363 Suite: bdevio tests on: Nvme1n1 00:23:13.621 Test: blockdev write read block ...passed 00:23:13.621 Test: blockdev write zeroes read block ...passed 00:23:13.621 Test: blockdev write zeroes read no split ...passed 00:23:13.621 Test: blockdev write zeroes 
read split ...passed 00:23:13.621 Test: blockdev write zeroes read split partial ...passed 00:23:13.621 Test: blockdev reset ...[2024-10-11 22:46:16.726358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:13.621 [2024-10-11 22:46:16.726467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df5570 (9): Bad file descriptor 00:23:13.621 [2024-10-11 22:46:16.875215] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:13.621 passed 00:23:13.878 Test: blockdev write read 8 blocks ...passed 00:23:13.878 Test: blockdev write read size > 128k ...passed 00:23:13.878 Test: blockdev write read invalid size ...passed 00:23:13.878 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:13.878 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:13.878 Test: blockdev write read max offset ...passed 00:23:13.878 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:13.878 Test: blockdev writev readv 8 blocks ...passed 00:23:13.878 Test: blockdev writev readv 30 x 1block ...passed 00:23:13.878 Test: blockdev writev readv block ...passed 00:23:13.878 Test: blockdev writev readv size > 128k ...passed 00:23:13.878 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:13.878 Test: blockdev comparev and writev ...[2024-10-11 22:46:17.129751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:13.878 [2024-10-11 22:46:17.129787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.878 [2024-10-11 22:46:17.129812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:13.878 [2024-10-11 22:46:17.129829] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:13.878 [2024-10-11 22:46:17.130153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:13.878 [2024-10-11 22:46:17.130178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:13.878 [2024-10-11 22:46:17.130209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:13.878 [2024-10-11 22:46:17.130228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:13.879 [2024-10-11 22:46:17.130540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:13.879 [2024-10-11 22:46:17.130571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:13.879 [2024-10-11 22:46:17.130594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:13.879 [2024-10-11 22:46:17.130610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:13.879 [2024-10-11 22:46:17.130945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:13.879 [2024-10-11 22:46:17.130970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:13.879 [2024-10-11 22:46:17.130991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:23:13.879 [2024-10-11 22:46:17.131007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:14.137 passed 00:23:14.137 Test: blockdev nvme passthru rw ...passed 00:23:14.137 Test: blockdev nvme passthru vendor specific ...[2024-10-11 22:46:17.213827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:14.137 [2024-10-11 22:46:17.213857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:14.137 [2024-10-11 22:46:17.214000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:14.137 [2024-10-11 22:46:17.214025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:14.137 [2024-10-11 22:46:17.214168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:14.137 [2024-10-11 22:46:17.214192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:14.137 [2024-10-11 22:46:17.214331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:14.137 [2024-10-11 22:46:17.214355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:14.137 passed 00:23:14.137 Test: blockdev nvme admin passthru ...passed 00:23:14.137 Test: blockdev copy ...passed 00:23:14.137 00:23:14.137 Run Summary: Type Total Ran Passed Failed Inactive 00:23:14.137 suites 1 1 n/a 0 0 00:23:14.137 tests 23 23 23 0 0 00:23:14.137 asserts 152 152 152 0 n/a 00:23:14.137 00:23:14.137 Elapsed time = 1.325 seconds 00:23:14.395 22:46:17 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:14.395 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.395 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:14.395 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.395 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:14.395 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:14.395 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:14.395 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:23:14.395 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:14.395 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:23:14.395 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:14.395 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:14.395 rmmod nvme_tcp 00:23:14.395 rmmod nvme_fabrics 00:23:14.395 rmmod nvme_keyring 00:23:14.395 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:14.395 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:23:14.395 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:23:14.395 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 269453 ']' 00:23:14.395 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@516 -- # killprocess 269453 00:23:14.395 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 269453 ']' 00:23:14.395 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 269453 00:23:14.395 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:23:14.395 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:14.395 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 269453 00:23:14.653 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:23:14.653 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:23:14.653 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 269453' 00:23:14.653 killing process with pid 269453 00:23:14.653 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 269453 00:23:14.653 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 269453 00:23:14.911 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:14.911 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:14.911 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:14.911 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:23:14.911 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:23:14.911 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:14.911 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:23:14.911 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:14.911 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:14.911 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.911 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.911 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:17.448 00:23:17.448 real 0m6.758s 00:23:17.448 user 0m11.009s 00:23:17.448 sys 0m2.698s 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:17.448 ************************************ 00:23:17.448 END TEST nvmf_bdevio_no_huge 00:23:17.448 ************************************ 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:17.448 ************************************ 00:23:17.448 START TEST nvmf_tls 
00:23:17.448 ************************************ 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:17.448 * Looking for test storage... 00:23:17.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:17.448 22:46:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 
'LCOV_OPTS= 00:23:17.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.448 --rc genhtml_branch_coverage=1 00:23:17.448 --rc genhtml_function_coverage=1 00:23:17.448 --rc genhtml_legend=1 00:23:17.448 --rc geninfo_all_blocks=1 00:23:17.448 --rc geninfo_unexecuted_blocks=1 00:23:17.448 00:23:17.448 ' 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:17.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.448 --rc genhtml_branch_coverage=1 00:23:17.448 --rc genhtml_function_coverage=1 00:23:17.448 --rc genhtml_legend=1 00:23:17.448 --rc geninfo_all_blocks=1 00:23:17.448 --rc geninfo_unexecuted_blocks=1 00:23:17.448 00:23:17.448 ' 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:17.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.448 --rc genhtml_branch_coverage=1 00:23:17.448 --rc genhtml_function_coverage=1 00:23:17.448 --rc genhtml_legend=1 00:23:17.448 --rc geninfo_all_blocks=1 00:23:17.448 --rc geninfo_unexecuted_blocks=1 00:23:17.448 00:23:17.448 ' 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:17.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.448 --rc genhtml_branch_coverage=1 00:23:17.448 --rc genhtml_function_coverage=1 00:23:17.448 --rc genhtml_legend=1 00:23:17.448 --rc geninfo_all_blocks=1 00:23:17.448 --rc geninfo_unexecuted_blocks=1 00:23:17.448 00:23:17.448 ' 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.448 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.449 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.449 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:17.449 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.449 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:23:17.449 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:17.449 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:17.449 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:17.449 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:17.449 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:17.449 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:17.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:17.449 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:17.449 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:17.449 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:17.449 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:17.449 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:23:17.449 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:17.449 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:17.449 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:17.449 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:17.449 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:17.449 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.449 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:17.449 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.449 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:17.449 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:17.449 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:23:17.449 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:19.351 22:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:19.351 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:19.351 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:19.351 22:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:19.351 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:19.351 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:19.351 22:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:19.351 
22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:19.351 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:19.610 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:19.610 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:19.610 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:19.610 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:19.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:19.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:23:19.610 00:23:19.610 --- 10.0.0.2 ping statistics --- 00:23:19.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.610 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:23:19.610 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:19.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:19.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:23:19.610 00:23:19.610 --- 10.0.0.1 ping statistics --- 00:23:19.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.610 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:23:19.610 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:19.610 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:23:19.610 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:19.610 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:19.610 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:19.610 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:19.610 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:19.610 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:19.610 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:19.610 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:19.610 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:19.610 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:19.610 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.610 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=271686 00:23:19.610 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:23:19.610 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 271686 00:23:19.610 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 271686 ']' 00:23:19.610 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.610 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:19.610 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.610 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:19.610 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.610 [2024-10-11 22:46:22.743293] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:23:19.610 [2024-10-11 22:46:22.743406] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.610 [2024-10-11 22:46:22.809058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.610 [2024-10-11 22:46:22.850951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:19.610 [2024-10-11 22:46:22.851013] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:19.610 [2024-10-11 22:46:22.851036] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:19.610 [2024-10-11 22:46:22.851046] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:19.610 [2024-10-11 22:46:22.851055] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:19.610 [2024-10-11 22:46:22.851657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.868 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:19.868 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:19.868 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:19.868 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:19.868 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.868 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:19.868 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:19.868 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:20.126 true 00:23:20.126 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:20.126 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:20.384 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:20.384 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:20.384 
22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:20.642 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:20.642 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:20.900 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:20.900 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:20.900 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:21.158 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:21.158 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:21.417 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:21.417 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:21.417 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:21.417 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:21.675 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:21.675 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:21.675 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
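The `format_interchange_psk` helper exercised a few records below shells out to an inline Python snippet to build the NVMe TLS PSK interchange string (`NVMeTLSkey-1:01:...:`). The sketch below reconstructs what that snippet appears to do, judging from the base64 payload visible in the generated key: the configured secret string is taken verbatim as the payload, a 4-byte little-endian CRC32 of it is appended, and the result is base64-encoded under the prefix and hash-id header. The CRC32 detail and the function name here are assumptions, not the exact SPDK implementation.

```python
import base64
import zlib


def format_interchange_psk(secret: str, hash_id: int = 1) -> str:
    """Sketch of the NVMe TLS PSK interchange format as seen in the log.

    Assumed layout: "NVMeTLSkey-1:<hash id, 2 hex digits>:<base64(secret
    bytes + little-endian CRC32 of those bytes)>:". The secret string is
    used as-is (it is not hex-decoded first).
    """
    data = secret.encode()
    crc = zlib.crc32(data).to_bytes(4, "little")  # assumed checksum trailer
    b64 = base64.b64encode(data + crc).decode()
    return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, b64)


# Same inputs the test passes to format_interchange_psk above/below.
key = format_interchange_psk("00112233445566778899aabbccddeeff", 1)
print(key)
```

For the secret used in this run, the base64 body should begin with `MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVl` (the encoding of the first 30 secret bytes), matching the key the log reports.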
00:23:21.932 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:21.933 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:22.191 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:22.191 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:22.191 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:22.762 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:22.762 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:22.762 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:22.762 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:22.762 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:22.762 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:22.762 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:23:22.762 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:23:22.762 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:23:22.762 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:23:22.762 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:23:23.021 22:46:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:23.021 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:23.021 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:23.021 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:23:23.021 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:23:23.021 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:23:23.021 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:23:23.021 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:23:23.021 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:23.021 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:23.021 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.d32tio7tS1 00:23:23.021 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:23.021 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.klYEdRJfuf 00:23:23.021 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:23.021 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:23.021 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.d32tio7tS1 00:23:23.021 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.klYEdRJfuf 00:23:23.021 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:23.279 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:23.538 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.d32tio7tS1 00:23:23.538 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.d32tio7tS1 00:23:23.538 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:23.795 [2024-10-11 22:46:27.032980] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.795 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:24.361 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:24.361 [2024-10-11 22:46:27.566387] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:24.361 [2024-10-11 22:46:27.566640] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.361 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:24.619 malloc0 00:23:24.619 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:25.185 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.d32tio7tS1 00:23:25.442 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:25.700 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.d32tio7tS1 00:23:35.668 Initializing NVMe Controllers 00:23:35.668 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:35.668 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:35.668 Initialization complete. Launching workers. 
00:23:35.668 ======================================================== 00:23:35.668 Latency(us) 00:23:35.668 Device Information : IOPS MiB/s Average min max 00:23:35.668 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8526.17 33.31 7508.29 985.41 42202.81 00:23:35.668 ======================================================== 00:23:35.668 Total : 8526.17 33.31 7508.29 985.41 42202.81 00:23:35.668 00:23:35.668 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.d32tio7tS1 00:23:35.668 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:35.668 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:35.668 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:35.668 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.d32tio7tS1 00:23:35.668 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:35.668 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=273582 00:23:35.668 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:35.668 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:35.668 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 273582 /var/tmp/bdevperf.sock 00:23:35.669 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 273582 ']' 00:23:35.669 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:23:35.669 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:35.669 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:35.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:35.669 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:35.669 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.669 [2024-10-11 22:46:38.934885] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:23:35.669 [2024-10-11 22:46:38.934970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid273582 ] 00:23:35.928 [2024-10-11 22:46:38.992535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.928 [2024-10-11 22:46:39.039210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.928 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:35.928 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:35.928 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.d32tio7tS1 00:23:36.185 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:23:36.752 [2024-10-11 22:46:39.714516] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:36.752 TLSTESTn1 00:23:36.752 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:36.752 Running I/O for 10 seconds... 00:23:39.059 3286.00 IOPS, 12.84 MiB/s [2024-10-11T20:46:43.262Z] 3247.00 IOPS, 12.68 MiB/s [2024-10-11T20:46:44.195Z] 3256.67 IOPS, 12.72 MiB/s [2024-10-11T20:46:45.127Z] 3217.50 IOPS, 12.57 MiB/s [2024-10-11T20:46:46.064Z] 3267.60 IOPS, 12.76 MiB/s [2024-10-11T20:46:46.997Z] 3280.17 IOPS, 12.81 MiB/s [2024-10-11T20:46:47.930Z] 3295.57 IOPS, 12.87 MiB/s [2024-10-11T20:46:49.302Z] 3293.12 IOPS, 12.86 MiB/s [2024-10-11T20:46:50.236Z] 3299.33 IOPS, 12.89 MiB/s [2024-10-11T20:46:50.236Z] 3306.80 IOPS, 12.92 MiB/s 00:23:46.968 Latency(us) 00:23:46.968 [2024-10-11T20:46:50.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.968 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:46.968 Verification LBA range: start 0x0 length 0x2000 00:23:46.968 TLSTESTn1 : 10.03 3309.86 12.93 0.00 0.00 38591.61 8592.50 60584.39 00:23:46.968 [2024-10-11T20:46:50.236Z] =================================================================================================================== 00:23:46.968 [2024-10-11T20:46:50.236Z] Total : 3309.86 12.93 0.00 0.00 38591.61 8592.50 60584.39 00:23:46.968 { 00:23:46.968 "results": [ 00:23:46.968 { 00:23:46.968 "job": "TLSTESTn1", 00:23:46.968 "core_mask": "0x4", 00:23:46.968 "workload": "verify", 00:23:46.968 "status": "finished", 00:23:46.968 "verify_range": { 00:23:46.968 "start": 0, 00:23:46.968 "length": 8192 00:23:46.968 }, 00:23:46.968 "queue_depth": 128, 00:23:46.968 "io_size": 4096, 00:23:46.968 "runtime": 10.029118, 00:23:46.968 "iops": 
3309.862342830147, 00:23:46.968 "mibps": 12.929149776680262, 00:23:46.968 "io_failed": 0, 00:23:46.968 "io_timeout": 0, 00:23:46.968 "avg_latency_us": 38591.60662205932, 00:23:46.968 "min_latency_us": 8592.497777777779, 00:23:46.968 "max_latency_us": 60584.39111111111 00:23:46.968 } 00:23:46.968 ], 00:23:46.968 "core_count": 1 00:23:46.968 } 00:23:46.968 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:46.968 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 273582 00:23:46.968 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 273582 ']' 00:23:46.968 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 273582 00:23:46.968 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:46.968 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:46.968 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 273582 00:23:46.968 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:46.968 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:46.968 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 273582' 00:23:46.968 killing process with pid 273582 00:23:46.968 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 273582 00:23:46.968 Received shutdown signal, test time was about 10.000000 seconds 00:23:46.968 00:23:46.968 Latency(us) 00:23:46.968 [2024-10-11T20:46:50.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.968 [2024-10-11T20:46:50.236Z] 
=================================================================================================================== 00:23:46.968 [2024-10-11T20:46:50.236Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:46.968 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 273582 00:23:46.968 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.klYEdRJfuf 00:23:46.968 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:46.968 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.klYEdRJfuf 00:23:46.968 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:46.968 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:46.968 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:46.968 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:46.968 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.klYEdRJfuf 00:23:46.968 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:46.968 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:46.968 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:46.968 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.klYEdRJfuf 00:23:46.968 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:46.968 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=274902 00:23:46.968 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:46.968 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:46.968 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 274902 /var/tmp/bdevperf.sock 00:23:46.968 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 274902 ']' 00:23:46.968 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:46.968 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:46.968 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:46.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:46.968 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:46.968 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.227 [2024-10-11 22:46:50.262516] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:23:47.227 [2024-10-11 22:46:50.262641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid274902 ] 00:23:47.227 [2024-10-11 22:46:50.324349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.227 [2024-10-11 22:46:50.371410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:47.227 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:47.227 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:47.227 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.klYEdRJfuf 00:23:47.792 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:48.051 [2024-10-11 22:46:51.063727] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:48.051 [2024-10-11 22:46:51.071101] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:48.051 [2024-10-11 22:46:51.071975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf6b70 (107): Transport endpoint is not connected 00:23:48.051 [2024-10-11 22:46:51.072966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf6b70 (9): Bad file descriptor 00:23:48.051 [2024-10-11 
22:46:51.073964] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.051 [2024-10-11 22:46:51.073984] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:48.051 [2024-10-11 22:46:51.073997] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:48.051 [2024-10-11 22:46:51.074014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.051 request: 00:23:48.051 { 00:23:48.051 "name": "TLSTEST", 00:23:48.051 "trtype": "tcp", 00:23:48.051 "traddr": "10.0.0.2", 00:23:48.051 "adrfam": "ipv4", 00:23:48.051 "trsvcid": "4420", 00:23:48.051 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.051 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:48.051 "prchk_reftag": false, 00:23:48.051 "prchk_guard": false, 00:23:48.051 "hdgst": false, 00:23:48.051 "ddgst": false, 00:23:48.051 "psk": "key0", 00:23:48.051 "allow_unrecognized_csi": false, 00:23:48.051 "method": "bdev_nvme_attach_controller", 00:23:48.051 "req_id": 1 00:23:48.051 } 00:23:48.051 Got JSON-RPC error response 00:23:48.051 response: 00:23:48.051 { 00:23:48.051 "code": -5, 00:23:48.051 "message": "Input/output error" 00:23:48.051 } 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 274902 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 274902 ']' 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 274902 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 274902 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 274902' 00:23:48.051 killing process with pid 274902 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 274902 00:23:48.051 Received shutdown signal, test time was about 10.000000 seconds 00:23:48.051 00:23:48.051 Latency(us) 00:23:48.051 [2024-10-11T20:46:51.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.051 [2024-10-11T20:46:51.319Z] =================================================================================================================== 00:23:48.051 [2024-10-11T20:46:51.319Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 274902 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.d32tio7tS1 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 
00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.d32tio7tS1 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.d32tio7tS1 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.d32tio7tS1 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=275039 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 275039 
/var/tmp/bdevperf.sock 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 275039 ']' 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:48.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:48.051 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.309 [2024-10-11 22:46:51.359955] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:23:48.309 [2024-10-11 22:46:51.360053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275039 ] 00:23:48.309 [2024-10-11 22:46:51.418490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.309 [2024-10-11 22:46:51.461330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:48.567 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:48.567 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:48.567 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.d32tio7tS1 00:23:48.824 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:49.083 [2024-10-11 22:46:52.107726] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:49.083 [2024-10-11 22:46:52.113689] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:49.083 [2024-10-11 22:46:52.113722] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:49.083 [2024-10-11 22:46:52.113780] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:49.083 [2024-10-11 22:46:52.114714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2234b70 (107): Transport endpoint is not connected 00:23:49.083 [2024-10-11 22:46:52.115704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2234b70 (9): Bad file descriptor 00:23:49.083 [2024-10-11 22:46:52.116702] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.083 [2024-10-11 22:46:52.116728] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:49.083 [2024-10-11 22:46:52.116742] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:49.083 [2024-10-11 22:46:52.116759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.083 request: 00:23:49.083 { 00:23:49.083 "name": "TLSTEST", 00:23:49.083 "trtype": "tcp", 00:23:49.083 "traddr": "10.0.0.2", 00:23:49.083 "adrfam": "ipv4", 00:23:49.083 "trsvcid": "4420", 00:23:49.083 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.083 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:49.083 "prchk_reftag": false, 00:23:49.083 "prchk_guard": false, 00:23:49.083 "hdgst": false, 00:23:49.083 "ddgst": false, 00:23:49.083 "psk": "key0", 00:23:49.083 "allow_unrecognized_csi": false, 00:23:49.083 "method": "bdev_nvme_attach_controller", 00:23:49.083 "req_id": 1 00:23:49.083 } 00:23:49.083 Got JSON-RPC error response 00:23:49.083 response: 00:23:49.083 { 00:23:49.083 "code": -5, 00:23:49.083 "message": "Input/output error" 00:23:49.083 } 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 275039 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 275039 ']' 00:23:49.083 22:46:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 275039 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 275039 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 275039' 00:23:49.083 killing process with pid 275039 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 275039 00:23:49.083 Received shutdown signal, test time was about 10.000000 seconds 00:23:49.083 00:23:49.083 Latency(us) 00:23:49.083 [2024-10-11T20:46:52.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.083 [2024-10-11T20:46:52.351Z] =================================================================================================================== 00:23:49.083 [2024-10-11T20:46:52.351Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 275039 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:49.083 22:46:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.d32tio7tS1 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.d32tio7tS1 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.d32tio7tS1 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.d32tio7tS1 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=275180 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 275180 /var/tmp/bdevperf.sock 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 275180 ']' 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:49.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:49.083 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.347 [2024-10-11 22:46:52.392905] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:23:49.347 [2024-10-11 22:46:52.392997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275180 ] 00:23:49.347 [2024-10-11 22:46:52.453729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.347 [2024-10-11 22:46:52.504778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:49.603 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:49.603 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:49.604 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.d32tio7tS1 00:23:49.860 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:50.117 [2024-10-11 22:46:53.143796] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:50.117 [2024-10-11 22:46:53.149183] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:50.117 [2024-10-11 22:46:53.149216] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:50.117 [2024-10-11 22:46:53.149261] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:50.117 [2024-10-11 22:46:53.149815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20aab70 (107): Transport endpoint is not connected 00:23:50.117 [2024-10-11 22:46:53.150804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20aab70 (9): Bad file descriptor 00:23:50.117 [2024-10-11 22:46:53.151802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:50.117 [2024-10-11 22:46:53.151824] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:50.117 [2024-10-11 22:46:53.151838] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:50.117 [2024-10-11 22:46:53.151856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:50.117 request: 00:23:50.117 { 00:23:50.117 "name": "TLSTEST", 00:23:50.117 "trtype": "tcp", 00:23:50.117 "traddr": "10.0.0.2", 00:23:50.117 "adrfam": "ipv4", 00:23:50.117 "trsvcid": "4420", 00:23:50.117 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:50.117 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:50.117 "prchk_reftag": false, 00:23:50.117 "prchk_guard": false, 00:23:50.117 "hdgst": false, 00:23:50.117 "ddgst": false, 00:23:50.117 "psk": "key0", 00:23:50.117 "allow_unrecognized_csi": false, 00:23:50.117 "method": "bdev_nvme_attach_controller", 00:23:50.117 "req_id": 1 00:23:50.117 } 00:23:50.117 Got JSON-RPC error response 00:23:50.117 response: 00:23:50.117 { 00:23:50.117 "code": -5, 00:23:50.117 "message": "Input/output error" 00:23:50.117 } 00:23:50.117 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 275180 00:23:50.117 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 275180 ']' 00:23:50.117 22:46:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 275180 00:23:50.117 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:50.117 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:50.117 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 275180 00:23:50.117 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:50.117 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:50.117 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 275180' 00:23:50.117 killing process with pid 275180 00:23:50.117 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 275180 00:23:50.117 Received shutdown signal, test time was about 10.000000 seconds 00:23:50.117 00:23:50.117 Latency(us) 00:23:50.117 [2024-10-11T20:46:53.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.117 [2024-10-11T20:46:53.385Z] =================================================================================================================== 00:23:50.117 [2024-10-11T20:46:53.385Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:50.117 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 275180 00:23:50.376 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:50.376 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:50.376 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:50.376 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:50.376 22:46:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:50.376 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:50.376 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:50.376 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:50.376 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:50.376 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:50.376 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:50.376 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:50.376 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:50.376 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:50.376 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:50.376 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:50.376 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:50.376 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:50.376 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=275295 00:23:50.376 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:50.376 22:46:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:50.376 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 275295 /var/tmp/bdevperf.sock 00:23:50.376 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 275295 ']' 00:23:50.376 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:50.376 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:50.376 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:50.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:50.376 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:50.376 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.376 [2024-10-11 22:46:53.440877] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:23:50.376 [2024-10-11 22:46:53.440973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275295 ] 00:23:50.376 [2024-10-11 22:46:53.500920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.376 [2024-10-11 22:46:53.549368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.634 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:50.634 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:50.634 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:50.892 [2024-10-11 22:46:53.937661] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:50.892 [2024-10-11 22:46:53.937708] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:50.892 request: 00:23:50.892 { 00:23:50.892 "name": "key0", 00:23:50.892 "path": "", 00:23:50.892 "method": "keyring_file_add_key", 00:23:50.892 "req_id": 1 00:23:50.892 } 00:23:50.892 Got JSON-RPC error response 00:23:50.892 response: 00:23:50.892 { 00:23:50.892 "code": -1, 00:23:50.892 "message": "Operation not permitted" 00:23:50.892 } 00:23:50.892 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:51.150 [2024-10-11 22:46:54.202459] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:23:51.150 [2024-10-11 22:46:54.202509] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:51.150 request: 00:23:51.150 { 00:23:51.150 "name": "TLSTEST", 00:23:51.150 "trtype": "tcp", 00:23:51.150 "traddr": "10.0.0.2", 00:23:51.150 "adrfam": "ipv4", 00:23:51.150 "trsvcid": "4420", 00:23:51.150 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.150 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:51.150 "prchk_reftag": false, 00:23:51.150 "prchk_guard": false, 00:23:51.150 "hdgst": false, 00:23:51.150 "ddgst": false, 00:23:51.150 "psk": "key0", 00:23:51.150 "allow_unrecognized_csi": false, 00:23:51.150 "method": "bdev_nvme_attach_controller", 00:23:51.150 "req_id": 1 00:23:51.150 } 00:23:51.150 Got JSON-RPC error response 00:23:51.150 response: 00:23:51.150 { 00:23:51.150 "code": -126, 00:23:51.150 "message": "Required key not available" 00:23:51.150 } 00:23:51.150 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 275295 00:23:51.150 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 275295 ']' 00:23:51.150 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 275295 00:23:51.150 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:51.150 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:51.150 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 275295 00:23:51.150 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:51.150 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:51.150 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 275295' 00:23:51.150 killing process with pid 275295 00:23:51.150 
22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 275295 00:23:51.150 Received shutdown signal, test time was about 10.000000 seconds 00:23:51.150 00:23:51.150 Latency(us) 00:23:51.150 [2024-10-11T20:46:54.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.150 [2024-10-11T20:46:54.418Z] =================================================================================================================== 00:23:51.150 [2024-10-11T20:46:54.418Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:51.150 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 275295 00:23:51.408 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:51.408 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:51.408 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:51.408 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:51.408 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:51.408 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 271686 00:23:51.408 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 271686 ']' 00:23:51.408 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 271686 00:23:51.408 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:51.408 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:51.408 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 271686 00:23:51.408 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:23:51.408 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:51.408 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 271686' 00:23:51.408 killing process with pid 271686 00:23:51.408 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 271686 00:23:51.408 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 271686 00:23:51.667 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:51.667 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:51.667 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:23:51.667 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:23:51.667 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:51.667 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:23:51.667 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:23:51.667 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:51.667 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:51.667 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.n0H6QGcfsI 00:23:51.667 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:51.667 22:46:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.n0H6QGcfsI 00:23:51.667 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:51.667 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:51.667 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:51.667 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.667 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=275470 00:23:51.667 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:51.667 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 275470 00:23:51.667 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 275470 ']' 00:23:51.667 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.667 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:51.667 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.667 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:51.667 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.667 [2024-10-11 22:46:54.785372] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:23:51.667 [2024-10-11 22:46:54.785449] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.667 [2024-10-11 22:46:54.850733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.667 [2024-10-11 22:46:54.898516] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.667 [2024-10-11 22:46:54.898590] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.667 [2024-10-11 22:46:54.898605] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.667 [2024-10-11 22:46:54.898616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.667 [2024-10-11 22:46:54.898626] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:51.667 [2024-10-11 22:46:54.899207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.925 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:51.925 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:51.925 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:51.925 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:51.925 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.925 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.925 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.n0H6QGcfsI 00:23:51.925 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.n0H6QGcfsI 00:23:51.925 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:52.183 [2024-10-11 22:46:55.285015] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.183 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:52.441 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:52.699 [2024-10-11 22:46:55.838503] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:52.699 [2024-10-11 22:46:55.838758] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:52.699 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:52.956 malloc0 00:23:52.956 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:53.214 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.n0H6QGcfsI 00:23:53.471 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:53.729 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.n0H6QGcfsI 00:23:53.729 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:53.729 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:53.729 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:53.729 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.n0H6QGcfsI 00:23:53.729 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:53.729 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=275765 00:23:53.729 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:53.729 22:46:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:53.729 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 275765 /var/tmp/bdevperf.sock 00:23:53.729 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 275765 ']' 00:23:53.729 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:53.729 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:53.729 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:53.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:53.729 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:53.729 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.729 [2024-10-11 22:46:56.990426] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:23:53.729 [2024-10-11 22:46:56.990516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275765 ] 00:23:53.988 [2024-10-11 22:46:57.051253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.988 [2024-10-11 22:46:57.098025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:53.988 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:53.988 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:53.988 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.n0H6QGcfsI 00:23:54.245 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:54.503 [2024-10-11 22:46:57.745406] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:54.761 TLSTESTn1 00:23:54.761 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:54.761 Running I/O for 10 seconds... 
00:23:57.065 3143.00 IOPS, 12.28 MiB/s [2024-10-11T20:47:01.265Z] 3196.00 IOPS, 12.48 MiB/s [2024-10-11T20:47:02.198Z] 3234.33 IOPS, 12.63 MiB/s [2024-10-11T20:47:03.131Z] 3263.25 IOPS, 12.75 MiB/s [2024-10-11T20:47:04.064Z] 3264.80 IOPS, 12.75 MiB/s [2024-10-11T20:47:04.997Z] 3250.00 IOPS, 12.70 MiB/s [2024-10-11T20:47:06.369Z] 3221.00 IOPS, 12.58 MiB/s [2024-10-11T20:47:07.302Z] 3237.88 IOPS, 12.65 MiB/s [2024-10-11T20:47:08.235Z] 3236.22 IOPS, 12.64 MiB/s [2024-10-11T20:47:08.235Z] 3246.40 IOPS, 12.68 MiB/s 00:24:04.967 Latency(us) 00:24:04.967 [2024-10-11T20:47:08.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.967 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:04.967 Verification LBA range: start 0x0 length 0x2000 00:24:04.967 TLSTESTn1 : 10.04 3247.16 12.68 0.00 0.00 39329.90 8107.05 35340.89 00:24:04.967 [2024-10-11T20:47:08.235Z] =================================================================================================================== 00:24:04.967 [2024-10-11T20:47:08.235Z] Total : 3247.16 12.68 0.00 0.00 39329.90 8107.05 35340.89 00:24:04.967 { 00:24:04.967 "results": [ 00:24:04.967 { 00:24:04.967 "job": "TLSTESTn1", 00:24:04.967 "core_mask": "0x4", 00:24:04.967 "workload": "verify", 00:24:04.967 "status": "finished", 00:24:04.967 "verify_range": { 00:24:04.967 "start": 0, 00:24:04.967 "length": 8192 00:24:04.967 }, 00:24:04.967 "queue_depth": 128, 00:24:04.967 "io_size": 4096, 00:24:04.967 "runtime": 10.036782, 00:24:04.967 "iops": 3247.156309661802, 00:24:04.967 "mibps": 12.684204334616414, 00:24:04.967 "io_failed": 0, 00:24:04.967 "io_timeout": 0, 00:24:04.967 "avg_latency_us": 39329.89964627817, 00:24:04.967 "min_latency_us": 8107.045925925926, 00:24:04.967 "max_latency_us": 35340.89481481481 00:24:04.967 } 00:24:04.967 ], 00:24:04.967 "core_count": 1 00:24:04.967 } 00:24:04.967 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:24:04.967 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 275765 00:24:04.967 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 275765 ']' 00:24:04.967 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 275765 00:24:04.967 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:04.967 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:04.967 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 275765 00:24:04.967 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:04.967 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:04.967 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 275765' 00:24:04.967 killing process with pid 275765 00:24:04.967 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 275765 00:24:04.967 Received shutdown signal, test time was about 10.000000 seconds 00:24:04.967 00:24:04.967 Latency(us) 00:24:04.967 [2024-10-11T20:47:08.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.967 [2024-10-11T20:47:08.235Z] =================================================================================================================== 00:24:04.967 [2024-10-11T20:47:08.235Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:04.967 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 275765 00:24:05.226 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.n0H6QGcfsI 00:24:05.226 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # 
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.n0H6QGcfsI 00:24:05.226 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:05.226 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.n0H6QGcfsI 00:24:05.226 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:05.226 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:05.226 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:05.226 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:05.226 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.n0H6QGcfsI 00:24:05.226 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:05.226 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:05.226 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:05.226 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.n0H6QGcfsI 00:24:05.226 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:05.226 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=277061 00:24:05.226 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:05.226 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:05.226 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 277061 /var/tmp/bdevperf.sock 00:24:05.226 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 277061 ']' 00:24:05.226 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:05.226 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:05.226 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:05.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:05.226 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:05.226 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:05.226 [2024-10-11 22:47:08.312360] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
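Aside (not part of the log): the result JSON of the first run above reports both `iops` and `mibps`; the two are consistent given the 4096-byte I/O size passed to bdevperf via `-o 4096`. A quick arithmetic cross-check:

```python
# Cross-check of the bdevperf result JSON above: MiB/s follows directly
# from IOPS at a fixed 4096-byte I/O size (the -o 4096 argument).
iops = 3247.156309661802           # "iops" field from the log
io_size = 4096                     # bytes per I/O
mibps = iops * io_size / (1024 * 1024)
print(mibps)                       # 12.684204334616414, the "mibps" field
```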
00:24:05.226 [2024-10-11 22:47:08.312455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid277061 ] 00:24:05.226 [2024-10-11 22:47:08.381184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.226 [2024-10-11 22:47:08.433762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:05.484 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:05.484 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:05.484 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.n0H6QGcfsI 00:24:05.742 [2024-10-11 22:47:08.867923] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.n0H6QGcfsI': 0100666 00:24:05.742 [2024-10-11 22:47:08.867968] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:05.742 request: 00:24:05.742 { 00:24:05.742 "name": "key0", 00:24:05.742 "path": "/tmp/tmp.n0H6QGcfsI", 00:24:05.742 "method": "keyring_file_add_key", 00:24:05.742 "req_id": 1 00:24:05.742 } 00:24:05.742 Got JSON-RPC error response 00:24:05.742 response: 00:24:05.742 { 00:24:05.742 "code": -1, 00:24:05.742 "message": "Operation not permitted" 00:24:05.742 } 00:24:05.742 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:06.000 [2024-10-11 22:47:09.196904] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:06.000 [2024-10-11 22:47:09.196948] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:06.000 request: 00:24:06.000 { 00:24:06.000 "name": "TLSTEST", 00:24:06.000 "trtype": "tcp", 00:24:06.000 "traddr": "10.0.0.2", 00:24:06.000 "adrfam": "ipv4", 00:24:06.000 "trsvcid": "4420", 00:24:06.000 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.000 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:06.000 "prchk_reftag": false, 00:24:06.000 "prchk_guard": false, 00:24:06.000 "hdgst": false, 00:24:06.000 "ddgst": false, 00:24:06.000 "psk": "key0", 00:24:06.000 "allow_unrecognized_csi": false, 00:24:06.000 "method": "bdev_nvme_attach_controller", 00:24:06.000 "req_id": 1 00:24:06.000 } 00:24:06.000 Got JSON-RPC error response 00:24:06.000 response: 00:24:06.000 { 00:24:06.000 "code": -126, 00:24:06.000 "message": "Required key not available" 00:24:06.000 } 00:24:06.000 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 277061 00:24:06.000 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 277061 ']' 00:24:06.000 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 277061 00:24:06.000 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:06.000 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:06.000 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 277061 00:24:06.000 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:06.000 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:06.000 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 277061' 00:24:06.000 killing process with pid 277061 00:24:06.000 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 277061 00:24:06.000 Received shutdown signal, test time was about 10.000000 seconds 00:24:06.000 00:24:06.000 Latency(us) 00:24:06.000 [2024-10-11T20:47:09.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.000 [2024-10-11T20:47:09.268Z] =================================================================================================================== 00:24:06.000 [2024-10-11T20:47:09.268Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:06.000 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 277061 00:24:06.258 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:06.258 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:06.258 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:06.258 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:06.258 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:06.258 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 275470 00:24:06.258 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 275470 ']' 00:24:06.258 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 275470 00:24:06.258 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:06.258 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:06.258 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 275470 00:24:06.258 22:47:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:06.258 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:06.258 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 275470' 00:24:06.258 killing process with pid 275470 00:24:06.258 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 275470 00:24:06.258 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 275470 00:24:06.516 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:24:06.516 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:06.516 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:06.516 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.516 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=277230 00:24:06.516 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:06.516 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 277230 00:24:06.516 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 277230 ']' 00:24:06.516 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.516 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:06.516 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:06.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:06.516 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:06.516 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.516 [2024-10-11 22:47:09.724701] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:24:06.516 [2024-10-11 22:47:09.724795] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:06.775 [2024-10-11 22:47:09.789307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.775 [2024-10-11 22:47:09.833626] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:06.775 [2024-10-11 22:47:09.833693] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:06.775 [2024-10-11 22:47:09.833716] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:06.775 [2024-10-11 22:47:09.833727] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:06.775 [2024-10-11 22:47:09.833736] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
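Aside (not part of the log): the `code: -1` / "Operation not permitted" failures above come from `keyring_file_check_path` rejecting the PSK file after `chmod 0666` widened its mode (logged as `0100666`). A sketch of that kind of check, assuming it simply requires that no group/other permission bits are set — SPDK's actual test may differ in detail:

```python
import os
import stat
import tempfile

def key_file_mode_ok(path):
    # Reject PSK files that group or others can access, mirroring the
    # "Invalid permissions for key file ... 0100666" error in the log.
    mode = os.stat(path).st_mode
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o666)
print(key_file_mode_ok(path))   # False: 0666 is rejected, as in the log
os.chmod(path, 0o600)
print(key_file_mode_ok(path))   # True: owner-only mode passes
os.remove(path)
```

This is why the later `chmod 0600` in the test restores a working key before the final, successful `keyring_file_add_key`.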
00:24:06.775 [2024-10-11 22:47:09.834279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.775 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:06.775 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:06.775 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:06.775 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:06.775 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.775 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.775 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.n0H6QGcfsI 00:24:06.775 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:06.775 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.n0H6QGcfsI 00:24:06.775 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:24:06.775 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:06.775 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:24:06.775 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:06.775 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.n0H6QGcfsI 00:24:06.775 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.n0H6QGcfsI 00:24:06.775 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:07.032 [2024-10-11 22:47:10.224106] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.032 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:07.290 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:07.547 [2024-10-11 22:47:10.765607] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:07.547 [2024-10-11 22:47:10.765850] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:07.547 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:07.804 malloc0 00:24:07.804 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:08.061 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.n0H6QGcfsI 00:24:08.318 [2024-10-11 22:47:11.578856] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.n0H6QGcfsI': 0100666 00:24:08.318 [2024-10-11 22:47:11.578893] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:08.318 request: 00:24:08.318 { 00:24:08.318 "name": "key0", 00:24:08.318 "path": "/tmp/tmp.n0H6QGcfsI", 00:24:08.318 "method": "keyring_file_add_key", 00:24:08.318 "req_id": 1 
00:24:08.318 } 00:24:08.318 Got JSON-RPC error response 00:24:08.318 response: 00:24:08.318 { 00:24:08.318 "code": -1, 00:24:08.318 "message": "Operation not permitted" 00:24:08.318 } 00:24:08.576 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:08.834 [2024-10-11 22:47:11.863692] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:24:08.834 [2024-10-11 22:47:11.863762] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:08.834 request: 00:24:08.834 { 00:24:08.834 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:08.834 "host": "nqn.2016-06.io.spdk:host1", 00:24:08.834 "psk": "key0", 00:24:08.834 "method": "nvmf_subsystem_add_host", 00:24:08.834 "req_id": 1 00:24:08.834 } 00:24:08.834 Got JSON-RPC error response 00:24:08.834 response: 00:24:08.834 { 00:24:08.834 "code": -32603, 00:24:08.834 "message": "Internal error" 00:24:08.834 } 00:24:08.834 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:08.834 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:08.834 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:08.834 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:08.834 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 277230 00:24:08.834 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 277230 ']' 00:24:08.834 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 277230 00:24:08.834 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:08.834 22:47:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:08.834 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 277230 00:24:08.834 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:08.834 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:08.834 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 277230' 00:24:08.834 killing process with pid 277230 00:24:08.834 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 277230 00:24:08.834 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 277230 00:24:09.092 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.n0H6QGcfsI 00:24:09.092 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:24:09.092 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:09.092 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:09.092 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.092 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=277532 00:24:09.092 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:09.092 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 277532 00:24:09.092 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 277532 ']' 00:24:09.092 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.092 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:09.092 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.092 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:09.092 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.092 [2024-10-11 22:47:12.166624] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:24:09.092 [2024-10-11 22:47:12.166735] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.092 [2024-10-11 22:47:12.228750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.092 [2024-10-11 22:47:12.269083] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.092 [2024-10-11 22:47:12.269160] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.092 [2024-10-11 22:47:12.269182] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.092 [2024-10-11 22:47:12.269193] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.092 [2024-10-11 22:47:12.269202] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
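Aside (not part of the log): the error codes in the JSON-RPC responses above mix two namespaces — negated POSIX errno values (`-1` = EPERM "Operation not permitted", `-126` = ENOKEY "Required key not available" on Linux) and JSON-RPC 2.0 reserved codes (`-32603` = "Internal error"). A small hypothetical decoder illustrating the split:

```python
import os

# Hypothetical helper, not part of SPDK: interpret the negative codes seen
# in the JSON-RPC error responses. JSON-RPC 2.0 reserves -32768..-32000;
# other negative codes in these logs are negated POSIX errno values.
_JSONRPC = {-32700: "Parse error", -32600: "Invalid Request",
            -32601: "Method not found", -32602: "Invalid params",
            -32603: "Internal error"}

def describe_rpc_error(code):
    if -32768 <= code <= -32000:
        return _JSONRPC.get(code, "Server error")
    return os.strerror(-code)

print(describe_rpc_error(-32603))  # Internal error
print(describe_rpc_error(-1))      # Operation not permitted
```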
00:24:09.092 [2024-10-11 22:47:12.269734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.350 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:09.350 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:09.350 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:09.350 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:09.350 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.350 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:09.350 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.n0H6QGcfsI 00:24:09.350 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.n0H6QGcfsI 00:24:09.350 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:09.607 [2024-10-11 22:47:12.652715] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.607 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:09.865 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:10.123 [2024-10-11 22:47:13.206272] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:10.123 [2024-10-11 22:47:13.206556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:24:10.123 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:10.381 malloc0 00:24:10.381 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:10.638 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.n0H6QGcfsI 00:24:10.896 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:11.154 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=277817 00:24:11.154 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:11.154 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:11.154 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 277817 /var/tmp/bdevperf.sock 00:24:11.154 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 277817 ']' 00:24:11.154 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:11.154 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:11.154 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:24:11.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:11.154 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:11.154 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:11.154 [2024-10-11 22:47:14.356099] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:24:11.154 [2024-10-11 22:47:14.356192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid277817 ] 00:24:11.154 [2024-10-11 22:47:14.414592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.413 [2024-10-11 22:47:14.460486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:11.413 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:11.413 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:11.413 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.n0H6QGcfsI 00:24:11.670 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:11.929 [2024-10-11 22:47:15.110830] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:11.929 TLSTESTn1 00:24:12.186 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:12.445 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:12.445 "subsystems": [ 00:24:12.445 { 00:24:12.445 "subsystem": "keyring", 00:24:12.445 "config": [ 00:24:12.445 { 00:24:12.445 "method": "keyring_file_add_key", 00:24:12.445 "params": { 00:24:12.445 "name": "key0", 00:24:12.445 "path": "/tmp/tmp.n0H6QGcfsI" 00:24:12.445 } 00:24:12.445 } 00:24:12.445 ] 00:24:12.445 }, 00:24:12.445 { 00:24:12.445 "subsystem": "iobuf", 00:24:12.445 "config": [ 00:24:12.445 { 00:24:12.445 "method": "iobuf_set_options", 00:24:12.445 "params": { 00:24:12.445 "small_pool_count": 8192, 00:24:12.445 "large_pool_count": 1024, 00:24:12.445 "small_bufsize": 8192, 00:24:12.445 "large_bufsize": 135168 00:24:12.445 } 00:24:12.445 } 00:24:12.445 ] 00:24:12.445 }, 00:24:12.445 { 00:24:12.445 "subsystem": "sock", 00:24:12.445 "config": [ 00:24:12.445 { 00:24:12.445 "method": "sock_set_default_impl", 00:24:12.445 "params": { 00:24:12.445 "impl_name": "posix" 00:24:12.445 } 00:24:12.445 }, 00:24:12.445 { 00:24:12.445 "method": "sock_impl_set_options", 00:24:12.445 "params": { 00:24:12.445 "impl_name": "ssl", 00:24:12.445 "recv_buf_size": 4096, 00:24:12.445 "send_buf_size": 4096, 00:24:12.445 "enable_recv_pipe": true, 00:24:12.445 "enable_quickack": false, 00:24:12.445 "enable_placement_id": 0, 00:24:12.445 "enable_zerocopy_send_server": true, 00:24:12.445 "enable_zerocopy_send_client": false, 00:24:12.445 "zerocopy_threshold": 0, 00:24:12.445 "tls_version": 0, 00:24:12.445 "enable_ktls": false 00:24:12.445 } 00:24:12.445 }, 00:24:12.445 { 00:24:12.445 "method": "sock_impl_set_options", 00:24:12.445 "params": { 00:24:12.445 "impl_name": "posix", 00:24:12.445 "recv_buf_size": 2097152, 00:24:12.445 "send_buf_size": 2097152, 00:24:12.445 "enable_recv_pipe": true, 00:24:12.445 "enable_quickack": false, 00:24:12.445 "enable_placement_id": 0, 00:24:12.445 
"enable_zerocopy_send_server": true, 00:24:12.445 "enable_zerocopy_send_client": false, 00:24:12.445 "zerocopy_threshold": 0, 00:24:12.445 "tls_version": 0, 00:24:12.445 "enable_ktls": false 00:24:12.445 } 00:24:12.445 } 00:24:12.445 ] 00:24:12.445 }, 00:24:12.445 { 00:24:12.445 "subsystem": "vmd", 00:24:12.445 "config": [] 00:24:12.445 }, 00:24:12.445 { 00:24:12.445 "subsystem": "accel", 00:24:12.445 "config": [ 00:24:12.445 { 00:24:12.445 "method": "accel_set_options", 00:24:12.445 "params": { 00:24:12.445 "small_cache_size": 128, 00:24:12.445 "large_cache_size": 16, 00:24:12.445 "task_count": 2048, 00:24:12.445 "sequence_count": 2048, 00:24:12.445 "buf_count": 2048 00:24:12.445 } 00:24:12.445 } 00:24:12.445 ] 00:24:12.445 }, 00:24:12.445 { 00:24:12.445 "subsystem": "bdev", 00:24:12.445 "config": [ 00:24:12.445 { 00:24:12.445 "method": "bdev_set_options", 00:24:12.445 "params": { 00:24:12.445 "bdev_io_pool_size": 65535, 00:24:12.445 "bdev_io_cache_size": 256, 00:24:12.445 "bdev_auto_examine": true, 00:24:12.445 "iobuf_small_cache_size": 128, 00:24:12.445 "iobuf_large_cache_size": 16 00:24:12.445 } 00:24:12.445 }, 00:24:12.445 { 00:24:12.445 "method": "bdev_raid_set_options", 00:24:12.445 "params": { 00:24:12.445 "process_window_size_kb": 1024, 00:24:12.445 "process_max_bandwidth_mb_sec": 0 00:24:12.445 } 00:24:12.445 }, 00:24:12.445 { 00:24:12.445 "method": "bdev_iscsi_set_options", 00:24:12.445 "params": { 00:24:12.445 "timeout_sec": 30 00:24:12.445 } 00:24:12.445 }, 00:24:12.445 { 00:24:12.445 "method": "bdev_nvme_set_options", 00:24:12.445 "params": { 00:24:12.445 "action_on_timeout": "none", 00:24:12.445 "timeout_us": 0, 00:24:12.445 "timeout_admin_us": 0, 00:24:12.445 "keep_alive_timeout_ms": 10000, 00:24:12.445 "arbitration_burst": 0, 00:24:12.445 "low_priority_weight": 0, 00:24:12.445 "medium_priority_weight": 0, 00:24:12.445 "high_priority_weight": 0, 00:24:12.445 "nvme_adminq_poll_period_us": 10000, 00:24:12.445 "nvme_ioq_poll_period_us": 0, 00:24:12.445 
"io_queue_requests": 0, 00:24:12.445 "delay_cmd_submit": true, 00:24:12.445 "transport_retry_count": 4, 00:24:12.445 "bdev_retry_count": 3, 00:24:12.445 "transport_ack_timeout": 0, 00:24:12.445 "ctrlr_loss_timeout_sec": 0, 00:24:12.445 "reconnect_delay_sec": 0, 00:24:12.445 "fast_io_fail_timeout_sec": 0, 00:24:12.445 "disable_auto_failback": false, 00:24:12.445 "generate_uuids": false, 00:24:12.445 "transport_tos": 0, 00:24:12.445 "nvme_error_stat": false, 00:24:12.445 "rdma_srq_size": 0, 00:24:12.445 "io_path_stat": false, 00:24:12.445 "allow_accel_sequence": false, 00:24:12.445 "rdma_max_cq_size": 0, 00:24:12.445 "rdma_cm_event_timeout_ms": 0, 00:24:12.445 "dhchap_digests": [ 00:24:12.445 "sha256", 00:24:12.445 "sha384", 00:24:12.445 "sha512" 00:24:12.445 ], 00:24:12.445 "dhchap_dhgroups": [ 00:24:12.446 "null", 00:24:12.446 "ffdhe2048", 00:24:12.446 "ffdhe3072", 00:24:12.446 "ffdhe4096", 00:24:12.446 "ffdhe6144", 00:24:12.446 "ffdhe8192" 00:24:12.446 ] 00:24:12.446 } 00:24:12.446 }, 00:24:12.446 { 00:24:12.446 "method": "bdev_nvme_set_hotplug", 00:24:12.446 "params": { 00:24:12.446 "period_us": 100000, 00:24:12.446 "enable": false 00:24:12.446 } 00:24:12.446 }, 00:24:12.446 { 00:24:12.446 "method": "bdev_malloc_create", 00:24:12.446 "params": { 00:24:12.446 "name": "malloc0", 00:24:12.446 "num_blocks": 8192, 00:24:12.446 "block_size": 4096, 00:24:12.446 "physical_block_size": 4096, 00:24:12.446 "uuid": "32f148cf-0be3-4992-ae05-7baee4f3a755", 00:24:12.446 "optimal_io_boundary": 0, 00:24:12.446 "md_size": 0, 00:24:12.446 "dif_type": 0, 00:24:12.446 "dif_is_head_of_md": false, 00:24:12.446 "dif_pi_format": 0 00:24:12.446 } 00:24:12.446 }, 00:24:12.446 { 00:24:12.446 "method": "bdev_wait_for_examine" 00:24:12.446 } 00:24:12.446 ] 00:24:12.446 }, 00:24:12.446 { 00:24:12.446 "subsystem": "nbd", 00:24:12.446 "config": [] 00:24:12.446 }, 00:24:12.446 { 00:24:12.446 "subsystem": "scheduler", 00:24:12.446 "config": [ 00:24:12.446 { 00:24:12.446 "method": 
"framework_set_scheduler", 00:24:12.446 "params": { 00:24:12.446 "name": "static" 00:24:12.446 } 00:24:12.446 } 00:24:12.446 ] 00:24:12.446 }, 00:24:12.446 { 00:24:12.446 "subsystem": "nvmf", 00:24:12.446 "config": [ 00:24:12.446 { 00:24:12.446 "method": "nvmf_set_config", 00:24:12.446 "params": { 00:24:12.446 "discovery_filter": "match_any", 00:24:12.446 "admin_cmd_passthru": { 00:24:12.446 "identify_ctrlr": false 00:24:12.446 }, 00:24:12.446 "dhchap_digests": [ 00:24:12.446 "sha256", 00:24:12.446 "sha384", 00:24:12.446 "sha512" 00:24:12.446 ], 00:24:12.446 "dhchap_dhgroups": [ 00:24:12.446 "null", 00:24:12.446 "ffdhe2048", 00:24:12.446 "ffdhe3072", 00:24:12.446 "ffdhe4096", 00:24:12.446 "ffdhe6144", 00:24:12.446 "ffdhe8192" 00:24:12.446 ] 00:24:12.446 } 00:24:12.446 }, 00:24:12.446 { 00:24:12.446 "method": "nvmf_set_max_subsystems", 00:24:12.446 "params": { 00:24:12.446 "max_subsystems": 1024 00:24:12.446 } 00:24:12.446 }, 00:24:12.446 { 00:24:12.446 "method": "nvmf_set_crdt", 00:24:12.446 "params": { 00:24:12.446 "crdt1": 0, 00:24:12.446 "crdt2": 0, 00:24:12.446 "crdt3": 0 00:24:12.446 } 00:24:12.446 }, 00:24:12.446 { 00:24:12.446 "method": "nvmf_create_transport", 00:24:12.446 "params": { 00:24:12.446 "trtype": "TCP", 00:24:12.446 "max_queue_depth": 128, 00:24:12.446 "max_io_qpairs_per_ctrlr": 127, 00:24:12.446 "in_capsule_data_size": 4096, 00:24:12.446 "max_io_size": 131072, 00:24:12.446 "io_unit_size": 131072, 00:24:12.446 "max_aq_depth": 128, 00:24:12.446 "num_shared_buffers": 511, 00:24:12.446 "buf_cache_size": 4294967295, 00:24:12.446 "dif_insert_or_strip": false, 00:24:12.446 "zcopy": false, 00:24:12.446 "c2h_success": false, 00:24:12.446 "sock_priority": 0, 00:24:12.446 "abort_timeout_sec": 1, 00:24:12.446 "ack_timeout": 0, 00:24:12.446 "data_wr_pool_size": 0 00:24:12.446 } 00:24:12.446 }, 00:24:12.446 { 00:24:12.446 "method": "nvmf_create_subsystem", 00:24:12.446 "params": { 00:24:12.446 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.446 
"allow_any_host": false, 00:24:12.446 "serial_number": "SPDK00000000000001", 00:24:12.446 "model_number": "SPDK bdev Controller", 00:24:12.446 "max_namespaces": 10, 00:24:12.446 "min_cntlid": 1, 00:24:12.446 "max_cntlid": 65519, 00:24:12.446 "ana_reporting": false 00:24:12.446 } 00:24:12.446 }, 00:24:12.446 { 00:24:12.446 "method": "nvmf_subsystem_add_host", 00:24:12.446 "params": { 00:24:12.446 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.446 "host": "nqn.2016-06.io.spdk:host1", 00:24:12.446 "psk": "key0" 00:24:12.446 } 00:24:12.446 }, 00:24:12.446 { 00:24:12.446 "method": "nvmf_subsystem_add_ns", 00:24:12.446 "params": { 00:24:12.446 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.446 "namespace": { 00:24:12.446 "nsid": 1, 00:24:12.446 "bdev_name": "malloc0", 00:24:12.446 "nguid": "32F148CF0BE34992AE057BAEE4F3A755", 00:24:12.446 "uuid": "32f148cf-0be3-4992-ae05-7baee4f3a755", 00:24:12.446 "no_auto_visible": false 00:24:12.446 } 00:24:12.446 } 00:24:12.446 }, 00:24:12.446 { 00:24:12.446 "method": "nvmf_subsystem_add_listener", 00:24:12.446 "params": { 00:24:12.446 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.446 "listen_address": { 00:24:12.446 "trtype": "TCP", 00:24:12.446 "adrfam": "IPv4", 00:24:12.446 "traddr": "10.0.0.2", 00:24:12.446 "trsvcid": "4420" 00:24:12.446 }, 00:24:12.446 "secure_channel": true 00:24:12.446 } 00:24:12.446 } 00:24:12.446 ] 00:24:12.446 } 00:24:12.446 ] 00:24:12.446 }' 00:24:12.446 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:12.704 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:12.704 "subsystems": [ 00:24:12.704 { 00:24:12.704 "subsystem": "keyring", 00:24:12.704 "config": [ 00:24:12.704 { 00:24:12.704 "method": "keyring_file_add_key", 00:24:12.704 "params": { 00:24:12.704 "name": "key0", 00:24:12.704 "path": "/tmp/tmp.n0H6QGcfsI" 00:24:12.704 } 
00:24:12.704 } 00:24:12.704 ] 00:24:12.704 }, 00:24:12.704 { 00:24:12.704 "subsystem": "iobuf", 00:24:12.704 "config": [ 00:24:12.704 { 00:24:12.704 "method": "iobuf_set_options", 00:24:12.704 "params": { 00:24:12.704 "small_pool_count": 8192, 00:24:12.704 "large_pool_count": 1024, 00:24:12.704 "small_bufsize": 8192, 00:24:12.704 "large_bufsize": 135168 00:24:12.704 } 00:24:12.705 } 00:24:12.705 ] 00:24:12.705 }, 00:24:12.705 { 00:24:12.705 "subsystem": "sock", 00:24:12.705 "config": [ 00:24:12.705 { 00:24:12.705 "method": "sock_set_default_impl", 00:24:12.705 "params": { 00:24:12.705 "impl_name": "posix" 00:24:12.705 } 00:24:12.705 }, 00:24:12.705 { 00:24:12.705 "method": "sock_impl_set_options", 00:24:12.705 "params": { 00:24:12.705 "impl_name": "ssl", 00:24:12.705 "recv_buf_size": 4096, 00:24:12.705 "send_buf_size": 4096, 00:24:12.705 "enable_recv_pipe": true, 00:24:12.705 "enable_quickack": false, 00:24:12.705 "enable_placement_id": 0, 00:24:12.705 "enable_zerocopy_send_server": true, 00:24:12.705 "enable_zerocopy_send_client": false, 00:24:12.705 "zerocopy_threshold": 0, 00:24:12.705 "tls_version": 0, 00:24:12.705 "enable_ktls": false 00:24:12.705 } 00:24:12.705 }, 00:24:12.705 { 00:24:12.705 "method": "sock_impl_set_options", 00:24:12.705 "params": { 00:24:12.705 "impl_name": "posix", 00:24:12.705 "recv_buf_size": 2097152, 00:24:12.705 "send_buf_size": 2097152, 00:24:12.705 "enable_recv_pipe": true, 00:24:12.705 "enable_quickack": false, 00:24:12.705 "enable_placement_id": 0, 00:24:12.705 "enable_zerocopy_send_server": true, 00:24:12.705 "enable_zerocopy_send_client": false, 00:24:12.705 "zerocopy_threshold": 0, 00:24:12.705 "tls_version": 0, 00:24:12.705 "enable_ktls": false 00:24:12.705 } 00:24:12.705 } 00:24:12.705 ] 00:24:12.705 }, 00:24:12.705 { 00:24:12.705 "subsystem": "vmd", 00:24:12.705 "config": [] 00:24:12.705 }, 00:24:12.705 { 00:24:12.705 "subsystem": "accel", 00:24:12.705 "config": [ 00:24:12.705 { 00:24:12.705 "method": "accel_set_options", 
00:24:12.705 "params": { 00:24:12.705 "small_cache_size": 128, 00:24:12.705 "large_cache_size": 16, 00:24:12.705 "task_count": 2048, 00:24:12.705 "sequence_count": 2048, 00:24:12.705 "buf_count": 2048 00:24:12.705 } 00:24:12.705 } 00:24:12.705 ] 00:24:12.705 }, 00:24:12.705 { 00:24:12.705 "subsystem": "bdev", 00:24:12.705 "config": [ 00:24:12.705 { 00:24:12.705 "method": "bdev_set_options", 00:24:12.705 "params": { 00:24:12.705 "bdev_io_pool_size": 65535, 00:24:12.705 "bdev_io_cache_size": 256, 00:24:12.705 "bdev_auto_examine": true, 00:24:12.705 "iobuf_small_cache_size": 128, 00:24:12.705 "iobuf_large_cache_size": 16 00:24:12.705 } 00:24:12.705 }, 00:24:12.705 { 00:24:12.705 "method": "bdev_raid_set_options", 00:24:12.705 "params": { 00:24:12.705 "process_window_size_kb": 1024, 00:24:12.705 "process_max_bandwidth_mb_sec": 0 00:24:12.705 } 00:24:12.705 }, 00:24:12.705 { 00:24:12.705 "method": "bdev_iscsi_set_options", 00:24:12.705 "params": { 00:24:12.705 "timeout_sec": 30 00:24:12.705 } 00:24:12.705 }, 00:24:12.705 { 00:24:12.705 "method": "bdev_nvme_set_options", 00:24:12.705 "params": { 00:24:12.705 "action_on_timeout": "none", 00:24:12.705 "timeout_us": 0, 00:24:12.705 "timeout_admin_us": 0, 00:24:12.705 "keep_alive_timeout_ms": 10000, 00:24:12.705 "arbitration_burst": 0, 00:24:12.705 "low_priority_weight": 0, 00:24:12.705 "medium_priority_weight": 0, 00:24:12.705 "high_priority_weight": 0, 00:24:12.705 "nvme_adminq_poll_period_us": 10000, 00:24:12.705 "nvme_ioq_poll_period_us": 0, 00:24:12.705 "io_queue_requests": 512, 00:24:12.705 "delay_cmd_submit": true, 00:24:12.705 "transport_retry_count": 4, 00:24:12.705 "bdev_retry_count": 3, 00:24:12.705 "transport_ack_timeout": 0, 00:24:12.705 "ctrlr_loss_timeout_sec": 0, 00:24:12.705 "reconnect_delay_sec": 0, 00:24:12.705 "fast_io_fail_timeout_sec": 0, 00:24:12.705 "disable_auto_failback": false, 00:24:12.705 "generate_uuids": false, 00:24:12.705 "transport_tos": 0, 00:24:12.705 "nvme_error_stat": false, 00:24:12.705 
"rdma_srq_size": 0, 00:24:12.705 "io_path_stat": false, 00:24:12.705 "allow_accel_sequence": false, 00:24:12.705 "rdma_max_cq_size": 0, 00:24:12.705 "rdma_cm_event_timeout_ms": 0, 00:24:12.705 "dhchap_digests": [ 00:24:12.705 "sha256", 00:24:12.705 "sha384", 00:24:12.705 "sha512" 00:24:12.705 ], 00:24:12.705 "dhchap_dhgroups": [ 00:24:12.705 "null", 00:24:12.705 "ffdhe2048", 00:24:12.705 "ffdhe3072", 00:24:12.705 "ffdhe4096", 00:24:12.705 "ffdhe6144", 00:24:12.705 "ffdhe8192" 00:24:12.705 ] 00:24:12.705 } 00:24:12.705 }, 00:24:12.705 { 00:24:12.705 "method": "bdev_nvme_attach_controller", 00:24:12.705 "params": { 00:24:12.705 "name": "TLSTEST", 00:24:12.705 "trtype": "TCP", 00:24:12.705 "adrfam": "IPv4", 00:24:12.705 "traddr": "10.0.0.2", 00:24:12.705 "trsvcid": "4420", 00:24:12.705 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.705 "prchk_reftag": false, 00:24:12.705 "prchk_guard": false, 00:24:12.705 "ctrlr_loss_timeout_sec": 0, 00:24:12.705 "reconnect_delay_sec": 0, 00:24:12.705 "fast_io_fail_timeout_sec": 0, 00:24:12.705 "psk": "key0", 00:24:12.705 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:12.705 "hdgst": false, 00:24:12.705 "ddgst": false, 00:24:12.705 "multipath": "multipath" 00:24:12.705 } 00:24:12.705 }, 00:24:12.705 { 00:24:12.705 "method": "bdev_nvme_set_hotplug", 00:24:12.705 "params": { 00:24:12.705 "period_us": 100000, 00:24:12.705 "enable": false 00:24:12.705 } 00:24:12.705 }, 00:24:12.705 { 00:24:12.705 "method": "bdev_wait_for_examine" 00:24:12.705 } 00:24:12.705 ] 00:24:12.705 }, 00:24:12.705 { 00:24:12.705 "subsystem": "nbd", 00:24:12.705 "config": [] 00:24:12.705 } 00:24:12.705 ] 00:24:12.705 }' 00:24:12.705 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 277817 00:24:12.705 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 277817 ']' 00:24:12.705 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 277817 00:24:12.705 22:47:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:12.705 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:12.705 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 277817 00:24:12.705 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:12.705 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:12.705 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 277817' 00:24:12.705 killing process with pid 277817 00:24:12.705 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 277817 00:24:12.705 Received shutdown signal, test time was about 10.000000 seconds 00:24:12.705 00:24:12.705 Latency(us) 00:24:12.705 [2024-10-11T20:47:15.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.705 [2024-10-11T20:47:15.973Z] =================================================================================================================== 00:24:12.705 [2024-10-11T20:47:15.973Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:12.705 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 277817 00:24:12.963 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 277532 00:24:12.963 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 277532 ']' 00:24:12.963 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 277532 00:24:12.963 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:12.963 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:24:12.963 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 277532 00:24:12.963 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:12.963 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:12.963 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 277532' 00:24:12.963 killing process with pid 277532 00:24:12.963 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 277532 00:24:12.963 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 277532 00:24:13.222 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:13.222 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:13.222 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:13.222 "subsystems": [ 00:24:13.222 { 00:24:13.222 "subsystem": "keyring", 00:24:13.222 "config": [ 00:24:13.222 { 00:24:13.222 "method": "keyring_file_add_key", 00:24:13.222 "params": { 00:24:13.222 "name": "key0", 00:24:13.222 "path": "/tmp/tmp.n0H6QGcfsI" 00:24:13.222 } 00:24:13.222 } 00:24:13.222 ] 00:24:13.222 }, 00:24:13.222 { 00:24:13.222 "subsystem": "iobuf", 00:24:13.222 "config": [ 00:24:13.222 { 00:24:13.222 "method": "iobuf_set_options", 00:24:13.222 "params": { 00:24:13.222 "small_pool_count": 8192, 00:24:13.222 "large_pool_count": 1024, 00:24:13.222 "small_bufsize": 8192, 00:24:13.222 "large_bufsize": 135168 00:24:13.222 } 00:24:13.222 } 00:24:13.222 ] 00:24:13.222 }, 00:24:13.222 { 00:24:13.222 "subsystem": "sock", 00:24:13.222 "config": [ 00:24:13.222 { 00:24:13.222 "method": "sock_set_default_impl", 00:24:13.222 "params": { 00:24:13.222 "impl_name": "posix" 00:24:13.222 } 
00:24:13.222 }, 00:24:13.222 { 00:24:13.222 "method": "sock_impl_set_options", 00:24:13.222 "params": { 00:24:13.222 "impl_name": "ssl", 00:24:13.222 "recv_buf_size": 4096, 00:24:13.222 "send_buf_size": 4096, 00:24:13.222 "enable_recv_pipe": true, 00:24:13.222 "enable_quickack": false, 00:24:13.222 "enable_placement_id": 0, 00:24:13.222 "enable_zerocopy_send_server": true, 00:24:13.222 "enable_zerocopy_send_client": false, 00:24:13.222 "zerocopy_threshold": 0, 00:24:13.222 "tls_version": 0, 00:24:13.222 "enable_ktls": false 00:24:13.222 } 00:24:13.222 }, 00:24:13.222 { 00:24:13.222 "method": "sock_impl_set_options", 00:24:13.222 "params": { 00:24:13.222 "impl_name": "posix", 00:24:13.222 "recv_buf_size": 2097152, 00:24:13.222 "send_buf_size": 2097152, 00:24:13.222 "enable_recv_pipe": true, 00:24:13.222 "enable_quickack": false, 00:24:13.222 "enable_placement_id": 0, 00:24:13.222 "enable_zerocopy_send_server": true, 00:24:13.222 "enable_zerocopy_send_client": false, 00:24:13.222 "zerocopy_threshold": 0, 00:24:13.222 "tls_version": 0, 00:24:13.222 "enable_ktls": false 00:24:13.222 } 00:24:13.222 } 00:24:13.222 ] 00:24:13.222 }, 00:24:13.222 { 00:24:13.222 "subsystem": "vmd", 00:24:13.222 "config": [] 00:24:13.222 }, 00:24:13.222 { 00:24:13.222 "subsystem": "accel", 00:24:13.222 "config": [ 00:24:13.222 { 00:24:13.222 "method": "accel_set_options", 00:24:13.222 "params": { 00:24:13.222 "small_cache_size": 128, 00:24:13.222 "large_cache_size": 16, 00:24:13.222 "task_count": 2048, 00:24:13.222 "sequence_count": 2048, 00:24:13.222 "buf_count": 2048 00:24:13.222 } 00:24:13.222 } 00:24:13.222 ] 00:24:13.222 }, 00:24:13.222 { 00:24:13.222 "subsystem": "bdev", 00:24:13.222 "config": [ 00:24:13.222 { 00:24:13.222 "method": "bdev_set_options", 00:24:13.222 "params": { 00:24:13.222 "bdev_io_pool_size": 65535, 00:24:13.222 "bdev_io_cache_size": 256, 00:24:13.222 "bdev_auto_examine": true, 00:24:13.222 "iobuf_small_cache_size": 128, 00:24:13.222 "iobuf_large_cache_size": 16 
00:24:13.222 } 00:24:13.222 }, 00:24:13.222 { 00:24:13.222 "method": "bdev_raid_set_options", 00:24:13.222 "params": { 00:24:13.222 "process_window_size_kb": 1024, 00:24:13.222 "process_max_bandwidth_mb_sec": 0 00:24:13.222 } 00:24:13.222 }, 00:24:13.222 { 00:24:13.222 "method": "bdev_iscsi_set_options", 00:24:13.222 "params": { 00:24:13.222 "timeout_sec": 30 00:24:13.222 } 00:24:13.222 }, 00:24:13.222 { 00:24:13.222 "method": "bdev_nvme_set_options", 00:24:13.222 "params": { 00:24:13.222 "action_on_timeout": "none", 00:24:13.222 "timeout_us": 0, 00:24:13.222 "timeout_admin_us": 0, 00:24:13.222 "keep_alive_timeout_ms": 10000, 00:24:13.222 "arbitration_burst": 0, 00:24:13.222 "low_priority_weight": 0, 00:24:13.222 "medium_priority_weight": 0, 00:24:13.222 "high_priority_weight": 0, 00:24:13.222 "nvme_adminq_poll_period_us": 10000, 00:24:13.222 "nvme_ioq_poll_period_us": 0, 00:24:13.222 "io_queue_requests": 0, 00:24:13.222 "delay_cmd_submit": true, 00:24:13.222 "transport_retry_count": 4, 00:24:13.222 "bdev_retry_count": 3, 00:24:13.222 "transport_ack_timeout": 0, 00:24:13.222 "ctrlr_loss_timeout_sec": 0, 00:24:13.222 "reconnect_delay_sec": 0, 00:24:13.222 "fast_io_fail_timeout_sec": 0, 00:24:13.222 "disable_auto_failback": false, 00:24:13.222 "generate_uuids": false, 00:24:13.222 "transport_tos": 0, 00:24:13.222 "nvme_error_stat": false, 00:24:13.222 "rdma_srq_size": 0, 00:24:13.222 "io_path_stat": false, 00:24:13.222 "allow_accel_sequence": false, 00:24:13.222 "rdma_max_cq_size": 0, 00:24:13.222 "rdma_cm_event_timeout_ms": 0, 00:24:13.222 "dhchap_digests": [ 00:24:13.222 "sha256", 00:24:13.222 "sha384", 00:24:13.222 "sha512" 00:24:13.222 ], 00:24:13.222 "dhchap_dhgroups": [ 00:24:13.222 "null", 00:24:13.222 "ffdhe2048", 00:24:13.222 "ffdhe3072", 00:24:13.222 "ffdhe4096", 00:24:13.222 "ffdhe6144", 00:24:13.222 "ffdhe8192" 00:24:13.222 ] 00:24:13.222 } 00:24:13.222 }, 00:24:13.222 { 00:24:13.222 "method": "bdev_nvme_set_hotplug", 00:24:13.222 "params": { 00:24:13.222 
"period_us": 100000, 00:24:13.222 "enable": false 00:24:13.222 } 00:24:13.222 }, 00:24:13.222 { 00:24:13.222 "method": "bdev_malloc_create", 00:24:13.222 "params": { 00:24:13.222 "name": "malloc0", 00:24:13.222 "num_blocks": 8192, 00:24:13.222 "block_size": 4096, 00:24:13.222 "physical_block_size": 4096, 00:24:13.222 "uuid": "32f148cf-0be3-4992-ae05-7baee4f3a755", 00:24:13.222 "optimal_io_boundary": 0, 00:24:13.222 "md_size": 0, 00:24:13.222 "dif_type": 0, 00:24:13.222 "dif_is_head_of_md": false, 00:24:13.222 "dif_pi_format": 0 00:24:13.222 } 00:24:13.222 }, 00:24:13.222 { 00:24:13.222 "method": "bdev_wait_for_examine" 00:24:13.222 } 00:24:13.222 ] 00:24:13.222 }, 00:24:13.222 { 00:24:13.222 "subsystem": "nbd", 00:24:13.222 "config": [] 00:24:13.222 }, 00:24:13.222 { 00:24:13.222 "subsystem": "scheduler", 00:24:13.222 "config": [ 00:24:13.222 { 00:24:13.222 "method": "framework_set_scheduler", 00:24:13.222 "params": { 00:24:13.222 "name": "static" 00:24:13.222 } 00:24:13.222 } 00:24:13.222 ] 00:24:13.222 }, 00:24:13.222 { 00:24:13.222 "subsystem": "nvmf", 00:24:13.222 "config": [ 00:24:13.222 { 00:24:13.222 "method": "nvmf_set_config", 00:24:13.222 "params": { 00:24:13.222 "discovery_filter": "match_any", 00:24:13.222 "admin_cmd_passthru": { 00:24:13.222 "identify_ctrlr": false 00:24:13.222 }, 00:24:13.222 "dhchap_digests": [ 00:24:13.222 "sha256", 00:24:13.222 "sha384", 00:24:13.222 "sha512" 00:24:13.222 ], 00:24:13.222 "dhchap_dhgroups": [ 00:24:13.222 "null", 00:24:13.222 "ffdhe2048", 00:24:13.222 "ffdhe3072", 00:24:13.222 "ffdhe4096", 00:24:13.222 "ffdhe6144", 00:24:13.222 "ffdhe8192" 00:24:13.222 ] 00:24:13.222 } 00:24:13.222 }, 00:24:13.222 { 00:24:13.222 "method": "nvmf_set_max_subsystems", 00:24:13.222 "params": { 00:24:13.222 "max_subsystems": 1024 00:24:13.222 } 00:24:13.222 }, 00:24:13.222 { 00:24:13.222 "method": "nvmf_set_crdt", 00:24:13.222 "params": { 00:24:13.222 "crdt1": 0, 00:24:13.222 "crdt2": 0, 00:24:13.222 "crdt3": 0 00:24:13.222 } 
00:24:13.222 }, 00:24:13.222 { 00:24:13.222 "method": "nvmf_create_transport", 00:24:13.222 "params": { 00:24:13.222 "trtype": "TCP", 00:24:13.222 "max_queue_depth": 128, 00:24:13.223 "max_io_qpairs_per_ctrlr": 127, 00:24:13.223 "in_capsule_data_size": 4096, 00:24:13.223 "max_io_size": 131072, 00:24:13.223 "io_unit_size": 131072, 00:24:13.223 "max_aq_depth": 128, 00:24:13.223 "num_shared_buffers": 511, 00:24:13.223 "buf_cache_size": 4294967295, 00:24:13.223 "dif_insert_or_strip": false, 00:24:13.223 "zcopy": false, 00:24:13.223 "c2h_success": false, 00:24:13.223 "sock_priority": 0, 00:24:13.223 "abort_timeout_sec": 1, 00:24:13.223 "ack_timeout": 0, 00:24:13.223 "data_wr_pool_size": 0 00:24:13.223 } 00:24:13.223 }, 00:24:13.223 { 00:24:13.223 "method": "nvmf_create_subsystem", 00:24:13.223 "params": { 00:24:13.223 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.223 "allow_any_host": false, 00:24:13.223 "serial_number": "SPDK00000000000001", 00:24:13.223 "model_number": "SPDK bdev Controller", 00:24:13.223 "max_namespaces": 10, 00:24:13.223 "min_cntlid": 1, 00:24:13.223 "max_cntlid": 65519, 00:24:13.223 "ana_reporting": false 00:24:13.223 } 00:24:13.223 }, 00:24:13.223 { 00:24:13.223 "method": "nvmf_subsystem_add_host", 00:24:13.223 "params": { 00:24:13.223 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.223 "host": "nqn.2016-06.io.spdk:host1", 00:24:13.223 "psk": "key0" 00:24:13.223 } 00:24:13.223 }, 00:24:13.223 { 00:24:13.223 "method": "nvmf_subsystem_add_ns", 00:24:13.223 "params": { 00:24:13.223 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.223 "namespace": { 00:24:13.223 "nsid": 1, 00:24:13.223 "bdev_name": "malloc0", 00:24:13.223 "nguid": "32F148CF0BE34992AE057BAEE4F3A755", 00:24:13.223 "uuid": "32f148cf-0be3-4992-ae05-7baee4f3a755", 00:24:13.223 "no_auto_visible": false 00:24:13.223 } 00:24:13.223 } 00:24:13.223 }, 00:24:13.223 { 00:24:13.223 "method": "nvmf_subsystem_add_listener", 00:24:13.223 "params": { 00:24:13.223 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:24:13.223 "listen_address": { 00:24:13.223 "trtype": "TCP", 00:24:13.223 "adrfam": "IPv4", 00:24:13.223 "traddr": "10.0.0.2", 00:24:13.223 "trsvcid": "4420" 00:24:13.223 }, 00:24:13.223 "secure_channel": true 00:24:13.223 } 00:24:13.223 } 00:24:13.223 ] 00:24:13.223 } 00:24:13.223 ] 00:24:13.223 }' 00:24:13.223 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:13.223 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:13.223 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=278095 00:24:13.223 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:13.223 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 278095 00:24:13.223 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 278095 ']' 00:24:13.223 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.223 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:13.223 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.223 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:13.223 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:13.223 [2024-10-11 22:47:16.404654] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:24:13.223 [2024-10-11 22:47:16.404752] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:13.223 [2024-10-11 22:47:16.467502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.481 [2024-10-11 22:47:16.510801] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:13.481 [2024-10-11 22:47:16.510865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:13.481 [2024-10-11 22:47:16.510879] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.481 [2024-10-11 22:47:16.510890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.481 [2024-10-11 22:47:16.510909] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:13.481 [2024-10-11 22:47:16.511497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.739 [2024-10-11 22:47:16.755406] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.739 [2024-10-11 22:47:16.787429] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:13.739 [2024-10-11 22:47:16.787669] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.306 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:14.306 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:14.306 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:14.306 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:14.306 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.306 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:14.306 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=278245 00:24:14.306 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 278245 /var/tmp/bdevperf.sock 00:24:14.306 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 278245 ']' 00:24:14.306 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:14.306 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:14.306 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:24:14.306 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:14.306 "subsystems": [ 00:24:14.306 { 00:24:14.306 "subsystem": "keyring", 00:24:14.306 "config": [ 00:24:14.306 { 00:24:14.306 "method": "keyring_file_add_key", 00:24:14.306 "params": { 00:24:14.306 "name": "key0", 00:24:14.306 "path": "/tmp/tmp.n0H6QGcfsI" 00:24:14.306 } 00:24:14.306 } 00:24:14.306 ] 00:24:14.306 }, 00:24:14.306 { 00:24:14.306 "subsystem": "iobuf", 00:24:14.306 "config": [ 00:24:14.306 { 00:24:14.306 "method": "iobuf_set_options", 00:24:14.306 "params": { 00:24:14.306 "small_pool_count": 8192, 00:24:14.306 "large_pool_count": 1024, 00:24:14.306 "small_bufsize": 8192, 00:24:14.306 "large_bufsize": 135168 00:24:14.306 } 00:24:14.306 } 00:24:14.306 ] 00:24:14.306 }, 00:24:14.306 { 00:24:14.306 "subsystem": "sock", 00:24:14.306 "config": [ 00:24:14.306 { 00:24:14.306 "method": "sock_set_default_impl", 00:24:14.306 "params": { 00:24:14.306 "impl_name": "posix" 00:24:14.306 } 00:24:14.306 }, 00:24:14.306 { 00:24:14.306 "method": "sock_impl_set_options", 00:24:14.306 "params": { 00:24:14.306 "impl_name": "ssl", 00:24:14.306 "recv_buf_size": 4096, 00:24:14.306 "send_buf_size": 4096, 00:24:14.306 "enable_recv_pipe": true, 00:24:14.306 "enable_quickack": false, 00:24:14.306 "enable_placement_id": 0, 00:24:14.306 "enable_zerocopy_send_server": true, 00:24:14.306 "enable_zerocopy_send_client": false, 00:24:14.306 "zerocopy_threshold": 0, 00:24:14.306 "tls_version": 0, 00:24:14.306 "enable_ktls": false 00:24:14.306 } 00:24:14.306 }, 00:24:14.306 { 00:24:14.306 "method": "sock_impl_set_options", 00:24:14.306 "params": { 00:24:14.306 "impl_name": "posix", 00:24:14.306 "recv_buf_size": 2097152, 00:24:14.306 "send_buf_size": 2097152, 00:24:14.306 "enable_recv_pipe": true, 00:24:14.306 "enable_quickack": false, 00:24:14.306 "enable_placement_id": 0, 00:24:14.306 "enable_zerocopy_send_server": true, 00:24:14.306 "enable_zerocopy_send_client": false, 
00:24:14.306 "zerocopy_threshold": 0, 00:24:14.306 "tls_version": 0, 00:24:14.306 "enable_ktls": false 00:24:14.306 } 00:24:14.306 } 00:24:14.306 ] 00:24:14.306 }, 00:24:14.306 { 00:24:14.306 "subsystem": "vmd", 00:24:14.306 "config": [] 00:24:14.306 }, 00:24:14.306 { 00:24:14.306 "subsystem": "accel", 00:24:14.306 "config": [ 00:24:14.306 { 00:24:14.306 "method": "accel_set_options", 00:24:14.306 "params": { 00:24:14.306 "small_cache_size": 128, 00:24:14.306 "large_cache_size": 16, 00:24:14.306 "task_count": 2048, 00:24:14.306 "sequence_count": 2048, 00:24:14.306 "buf_count": 2048 00:24:14.306 } 00:24:14.306 } 00:24:14.306 ] 00:24:14.306 }, 00:24:14.306 { 00:24:14.306 "subsystem": "bdev", 00:24:14.306 "config": [ 00:24:14.306 { 00:24:14.306 "method": "bdev_set_options", 00:24:14.306 "params": { 00:24:14.306 "bdev_io_pool_size": 65535, 00:24:14.306 "bdev_io_cache_size": 256, 00:24:14.306 "bdev_auto_examine": true, 00:24:14.306 "iobuf_small_cache_size": 128, 00:24:14.306 "iobuf_large_cache_size": 16 00:24:14.306 } 00:24:14.306 }, 00:24:14.306 { 00:24:14.306 "method": "bdev_raid_set_options", 00:24:14.306 "params": { 00:24:14.306 "process_window_size_kb": 1024, 00:24:14.306 "process_max_bandwidth_mb_sec": 0 00:24:14.306 } 00:24:14.306 }, 00:24:14.306 { 00:24:14.306 "method": "bdev_iscsi_set_options", 00:24:14.306 "params": { 00:24:14.306 "timeout_sec": 30 00:24:14.306 } 00:24:14.306 }, 00:24:14.306 { 00:24:14.306 "method": "bdev_nvme_set_options", 00:24:14.306 "params": { 00:24:14.306 "action_on_timeout": "none", 00:24:14.306 "timeout_us": 0, 00:24:14.306 "timeout_admin_us": 0, 00:24:14.306 "keep_alive_timeout_ms": 10000, 00:24:14.306 "arbitration_burst": 0, 00:24:14.306 "low_priority_weight": 0, 00:24:14.306 "medium_priority_weight": 0, 00:24:14.306 "high_priority_weight": 0, 00:24:14.306 "nvme_adminq_poll_period_us": 10000, 00:24:14.306 "nvme_ioq_poll_period_us": 0, 00:24:14.306 "io_queue_requests": 512, 00:24:14.306 "delay_cmd_submit": true, 00:24:14.306 
"transport_retry_count": 4, 00:24:14.306 "bdev_retry_count": 3, 00:24:14.306 "transport_ack_timeout": 0, 00:24:14.306 "ctrlr_loss_timeout_sec": 0, 00:24:14.306 "reconnect_delay_sec": 0, 00:24:14.306 "fast_io_fail_timeout_sec": 0, 00:24:14.306 "disable_auto_failback": false, 00:24:14.306 "generate_uuids": false, 00:24:14.306 "transport_tos": 0, 00:24:14.306 "nvme_error_stat": false, 00:24:14.306 "rdma_srq_size": 0, 00:24:14.306 "io_path_stat": false, 00:24:14.306 "allow_accel_sequence": false, 00:24:14.306 "rdma_max_cq_size": 0, 00:24:14.306 "rdma_cm_event_timeout_ms": 0, 00:24:14.306 "dhchap_digests": [ 00:24:14.306 "sha256", 00:24:14.306 "sha384", 00:24:14.306 "sha512" 00:24:14.306 ], 00:24:14.306 "dhchap_dhgroups": [ 00:24:14.306 "null", 00:24:14.306 "ffdhe2048", 00:24:14.306 "ffdhe3072", 00:24:14.306 "ffdhe4096", 00:24:14.307 "ffdhe6144", 00:24:14.307 "ffdhe8192" 00:24:14.307 ] 00:24:14.307 } 00:24:14.307 }, 00:24:14.307 { 00:24:14.307 "method": "bdev_nvme_attach_controller", 00:24:14.307 "params": { 00:24:14.307 "name": "TLSTEST", 00:24:14.307 "trtype": "TCP", 00:24:14.307 "adrfam": "IPv4", 00:24:14.307 "traddr": "10.0.0.2", 00:24:14.307 "trsvcid": "4420", 00:24:14.307 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.307 "prchk_reftag": false, 00:24:14.307 "prchk_guard": false, 00:24:14.307 "ctrlr_loss_timeout_sec": 0, 00:24:14.307 "reconnect_delay_sec": 0, 00:24:14.307 "fast_io_fail_timeout_sec": 0, 00:24:14.307 "psk": "key0", 00:24:14.307 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:14.307 "hdgst": false, 00:24:14.307 "ddgst": false, 00:24:14.307 "multipath": "multipath" 00:24:14.307 } 00:24:14.307 }, 00:24:14.307 { 00:24:14.307 "method": "bdev_nvme_set_hotplug", 00:24:14.307 "params": { 00:24:14.307 "period_us": 100000, 00:24:14.307 "enable": false 00:24:14.307 } 00:24:14.307 }, 00:24:14.307 { 00:24:14.307 "method": "bdev_wait_for_examine" 00:24:14.307 } 00:24:14.307 ] 00:24:14.307 }, 00:24:14.307 { 00:24:14.307 "subsystem": "nbd", 00:24:14.307 "config": [] 
00:24:14.307 } 00:24:14.307 ] 00:24:14.307 }' 00:24:14.307 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:14.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:14.307 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:14.307 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.307 [2024-10-11 22:47:17.480082] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:24:14.307 [2024-10-11 22:47:17.480164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid278245 ] 00:24:14.307 [2024-10-11 22:47:17.540419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.565 [2024-10-11 22:47:17.587882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:14.565 [2024-10-11 22:47:17.761944] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:14.822 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:14.823 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:14.823 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:14.823 Running I/O for 10 seconds... 
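The bdevperf JSON traced above (fed in via /dev/fd/63) is long, but only a couple of subsystems matter for the TLS path. A trimmed, hand-edited sketch for readability, not the literal config the harness passed: the key path, address, and NQNs are copied from this run's trace, everything omitted here was left at its traced value:

```json
{
  "subsystems": [
    {
      "subsystem": "keyring",
      "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.n0H6QGcfsI" } }
      ]
    },
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "TLSTEST", "trtype": "TCP", "adrfam": "IPv4",
            "traddr": "10.0.0.2", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "psk": "key0"
          } }
      ]
    }
  ]
}
```

The `"psk": "key0"` parameter is what ties the attach back to the keyring entry; the rest of the traced config (sock, iobuf, accel options) is orthogonal to TLS.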
00:24:17.128 2958.00 IOPS, 11.55 MiB/s [2024-10-11T20:47:21.329Z] 3075.50 IOPS, 12.01 MiB/s [2024-10-11T20:47:22.262Z] 3175.33 IOPS, 12.40 MiB/s [2024-10-11T20:47:23.195Z] 3228.00 IOPS, 12.61 MiB/s [2024-10-11T20:47:24.145Z] 3252.60 IOPS, 12.71 MiB/s [2024-10-11T20:47:25.078Z] 3249.00 IOPS, 12.69 MiB/s [2024-10-11T20:47:26.450Z] 3281.14 IOPS, 12.82 MiB/s [2024-10-11T20:47:27.383Z] 3280.00 IOPS, 12.81 MiB/s [2024-10-11T20:47:28.317Z] 3274.89 IOPS, 12.79 MiB/s [2024-10-11T20:47:28.317Z] 3267.60 IOPS, 12.76 MiB/s 00:24:25.049 Latency(us) 00:24:25.049 [2024-10-11T20:47:28.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.049 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:25.049 Verification LBA range: start 0x0 length 0x2000 00:24:25.049 TLSTESTn1 : 10.02 3274.18 12.79 0.00 0.00 39034.39 6602.15 64079.64 00:24:25.049 [2024-10-11T20:47:28.317Z] =================================================================================================================== 00:24:25.049 [2024-10-11T20:47:28.317Z] Total : 3274.18 12.79 0.00 0.00 39034.39 6602.15 64079.64 00:24:25.049 { 00:24:25.049 "results": [ 00:24:25.049 { 00:24:25.049 "job": "TLSTESTn1", 00:24:25.049 "core_mask": "0x4", 00:24:25.049 "workload": "verify", 00:24:25.049 "status": "finished", 00:24:25.049 "verify_range": { 00:24:25.049 "start": 0, 00:24:25.049 "length": 8192 00:24:25.049 }, 00:24:25.049 "queue_depth": 128, 00:24:25.049 "io_size": 4096, 00:24:25.049 "runtime": 10.018692, 00:24:25.049 "iops": 3274.1799029254516, 00:24:25.049 "mibps": 12.789765245802545, 00:24:25.049 "io_failed": 0, 00:24:25.049 "io_timeout": 0, 00:24:25.049 "avg_latency_us": 39034.389736620746, 00:24:25.049 "min_latency_us": 6602.145185185185, 00:24:25.049 "max_latency_us": 64079.64444444444 00:24:25.049 } 00:24:25.049 ], 00:24:25.049 "core_count": 1 00:24:25.049 } 00:24:25.050 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
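The MiB/s column in the table above is just IOPS scaled by the 4096-byte I/O size. A quick sanity check in shell, using the final `iops` value copied from this run's results JSON:

```shell
# Reproduce bdevperf's reported MiB/s from its reported IOPS and the 4 KiB I/O size
iops=3274.1799029254516
io_size=4096
awk -v iops="$iops" -v sz="$io_size" \
    'BEGIN { printf "%.2f MiB/s\n", iops * sz / (1024 * 1024) }'
# prints "12.79 MiB/s"
```

This matches the `"mibps": 12.789765245802545` field in the results JSON above.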
exit 1' SIGINT SIGTERM EXIT 00:24:25.050 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 278245 00:24:25.050 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 278245 ']' 00:24:25.050 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 278245 00:24:25.050 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:25.050 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:25.050 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 278245 00:24:25.050 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:25.050 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:25.050 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 278245' 00:24:25.050 killing process with pid 278245 00:24:25.050 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 278245 00:24:25.050 Received shutdown signal, test time was about 10.000000 seconds 00:24:25.050 00:24:25.050 Latency(us) 00:24:25.050 [2024-10-11T20:47:28.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.050 [2024-10-11T20:47:28.318Z] =================================================================================================================== 00:24:25.050 [2024-10-11T20:47:28.318Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:25.050 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 278245 00:24:25.050 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 278095 00:24:25.050 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 
-- # '[' -z 278095 ']' 00:24:25.050 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 278095 00:24:25.050 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:25.050 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:25.050 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 278095 00:24:25.050 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:25.050 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:25.050 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 278095' 00:24:25.050 killing process with pid 278095 00:24:25.050 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 278095 00:24:25.050 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 278095 00:24:25.310 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:25.310 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:25.310 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:25.310 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:25.310 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=279447 00:24:25.310 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:25.310 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 279447 00:24:25.310 22:47:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 279447 ']' 00:24:25.310 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.310 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:25.310 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:25.310 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:25.310 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:25.310 [2024-10-11 22:47:28.564231] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:24:25.310 [2024-10-11 22:47:28.564348] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:25.568 [2024-10-11 22:47:28.627225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.568 [2024-10-11 22:47:28.666310] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:25.568 [2024-10-11 22:47:28.666371] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:25.568 [2024-10-11 22:47:28.666394] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:25.568 [2024-10-11 22:47:28.666405] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:25.568 [2024-10-11 22:47:28.666414] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:25.568 [2024-10-11 22:47:28.666991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.568 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:25.568 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:25.568 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:25.568 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:25.568 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:25.568 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:25.568 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.n0H6QGcfsI 00:24:25.568 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.n0H6QGcfsI 00:24:25.569 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:25.826 [2024-10-11 22:47:29.071702] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:25.826 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:26.392 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:26.392 [2024-10-11 22:47:29.605160] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:24:26.392 [2024-10-11 22:47:29.605407] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:26.392 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:26.650 malloc0 00:24:26.650 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:27.216 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.n0H6QGcfsI 00:24:27.473 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:27.732 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=279731 00:24:27.732 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:27.732 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:27.732 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 279731 /var/tmp/bdevperf.sock 00:24:27.732 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 279731 ']' 00:24:27.732 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:27.732 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:27.732 22:47:30 
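Collected in one place, the target-side setup the trace above just walked through is a short RPC sequence. This is a recap of the commands already visible in the xtrace, not a standalone script: it assumes a running nvmf_tgt with its RPC socket up, and the key path is the temp file this particular run generated:

```
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
# -k marks the listener as requiring a secure (TLS) channel
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.n0H6QGcfsI
# bind the PSK to the host that will connect
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
```

The initiator side (bdevperf's `bdev_nvme_attach_controller ... --psk key0`) then completes the handshake against this listener.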
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:27.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:27.732 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:27.732 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:27.732 [2024-10-11 22:47:30.845131] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:24:27.732 [2024-10-11 22:47:30.845223] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid279731 ] 00:24:27.732 [2024-10-11 22:47:30.903406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.732 [2024-10-11 22:47:30.949641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.991 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:27.991 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:27.991 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.n0H6QGcfsI 00:24:28.248 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:28.506 [2024-10-11 22:47:31.610050] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered 
experimental 00:24:28.506 nvme0n1 00:24:28.506 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:28.764 Running I/O for 1 seconds... 00:24:29.698 3132.00 IOPS, 12.23 MiB/s 00:24:29.698 Latency(us) 00:24:29.698 [2024-10-11T20:47:32.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.698 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:29.698 Verification LBA range: start 0x0 length 0x2000 00:24:29.698 nvme0n1 : 1.03 3172.95 12.39 0.00 0.00 39887.84 6262.33 41748.86 00:24:29.698 [2024-10-11T20:47:32.966Z] =================================================================================================================== 00:24:29.698 [2024-10-11T20:47:32.966Z] Total : 3172.95 12.39 0.00 0.00 39887.84 6262.33 41748.86 00:24:29.698 { 00:24:29.698 "results": [ 00:24:29.698 { 00:24:29.698 "job": "nvme0n1", 00:24:29.698 "core_mask": "0x2", 00:24:29.698 "workload": "verify", 00:24:29.698 "status": "finished", 00:24:29.698 "verify_range": { 00:24:29.698 "start": 0, 00:24:29.698 "length": 8192 00:24:29.698 }, 00:24:29.698 "queue_depth": 128, 00:24:29.698 "io_size": 4096, 00:24:29.698 "runtime": 1.027435, 00:24:29.698 "iops": 3172.9501136324925, 00:24:29.698 "mibps": 12.394336381376924, 00:24:29.698 "io_failed": 0, 00:24:29.698 "io_timeout": 0, 00:24:29.698 "avg_latency_us": 39887.843846398544, 00:24:29.698 "min_latency_us": 6262.328888888889, 00:24:29.698 "max_latency_us": 41748.85925925926 00:24:29.698 } 00:24:29.698 ], 00:24:29.698 "core_count": 1 00:24:29.698 } 00:24:29.698 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 279731 00:24:29.698 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 279731 ']' 00:24:29.698 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 279731 00:24:29.698 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:29.698 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:29.698 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 279731 00:24:29.698 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:29.698 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:29.698 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 279731' 00:24:29.698 killing process with pid 279731 00:24:29.698 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 279731 00:24:29.698 Received shutdown signal, test time was about 1.000000 seconds 00:24:29.698 00:24:29.698 Latency(us) 00:24:29.698 [2024-10-11T20:47:32.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.698 [2024-10-11T20:47:32.966Z] =================================================================================================================== 00:24:29.698 [2024-10-11T20:47:32.966Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:29.698 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 279731 00:24:29.956 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 279447 00:24:29.956 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 279447 ']' 00:24:29.956 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 279447 00:24:29.956 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:29.956 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:29.956 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 279447 00:24:29.956 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:29.956 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:29.956 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 279447' 00:24:29.956 killing process with pid 279447 00:24:29.956 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 279447 00:24:29.956 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 279447 00:24:30.215 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:30.215 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:30.215 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:30.215 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:30.215 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=280100 00:24:30.215 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:30.215 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 280100 00:24:30.215 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 280100 ']' 00:24:30.215 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.215 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:24:30.215 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.215 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:30.215 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:30.215 [2024-10-11 22:47:33.331071] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:24:30.215 [2024-10-11 22:47:33.331171] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.215 [2024-10-11 22:47:33.396739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.215 [2024-10-11 22:47:33.441946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:30.215 [2024-10-11 22:47:33.441998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:30.215 [2024-10-11 22:47:33.442021] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:30.215 [2024-10-11 22:47:33.442032] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:30.215 [2024-10-11 22:47:33.442042] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:30.215 [2024-10-11 22:47:33.442614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.474 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:30.474 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:30.474 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:30.474 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:30.474 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:30.474 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:30.474 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:30.474 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.474 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:30.474 [2024-10-11 22:47:33.601782] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:30.474 malloc0 00:24:30.474 [2024-10-11 22:47:33.633577] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:30.474 [2024-10-11 22:47:33.633812] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:30.474 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.474 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=280157 00:24:30.474 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:30.474 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 280157 /var/tmp/bdevperf.sock 00:24:30.474 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 280157 ']' 00:24:30.474 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:30.474 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:30.474 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:30.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:30.474 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:30.474 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:30.474 [2024-10-11 22:47:33.703836] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:24:30.474 [2024-10-11 22:47:33.703926] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid280157 ] 00:24:30.733 [2024-10-11 22:47:33.761054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.733 [2024-10-11 22:47:33.805651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.733 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:30.733 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:30.733 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.n0H6QGcfsI 00:24:30.991 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:31.249 [2024-10-11 22:47:34.440214] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:31.249 nvme0n1 00:24:31.507 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:31.507 Running I/O for 1 seconds... 
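The trace above registers a PSK file with the keyring and then attaches an NVMe-oF/TCP controller over TLS. The sequence can be sketched as below; the socket path, key file, and NQNs are taken verbatim from this log, and `RPC` is set to `echo` so the sketch is a dry run (no bdevperf socket exists outside the test harness):

```shell
# Dry-run sketch of the TLS bring-up sequence from the trace above.
# Drop the leading "echo" to issue the RPCs against a live bdevperf socket.
RPC="echo rpc.py -s /var/tmp/bdevperf.sock"

# 1. Register the PSK file with the keyring so it can be referenced as "key0".
add_cmd=$($RPC keyring_file_add_key key0 /tmp/tmp.n0H6QGcfsI)

# 2. Attach an NVMe-oF/TCP controller, passing the key name via --psk to
#    request a TLS-secured connection (marked experimental in the log).
attach_cmd=$($RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1)

echo "$add_cmd"
echo "$attach_cmd"
```

Once the controller attaches, the `nvme0n1` namespace appears and the bdevperf `perform_tests` RPC drives I/O through the TLS-wrapped TCP connection.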
00:24:32.442 3263.00 IOPS, 12.75 MiB/s 00:24:32.442 Latency(us) 00:24:32.442 [2024-10-11T20:47:35.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.442 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:32.442 Verification LBA range: start 0x0 length 0x2000 00:24:32.442 nvme0n1 : 1.04 3251.44 12.70 0.00 0.00 38645.22 11213.94 38447.79 00:24:32.442 [2024-10-11T20:47:35.710Z] =================================================================================================================== 00:24:32.442 [2024-10-11T20:47:35.710Z] Total : 3251.44 12.70 0.00 0.00 38645.22 11213.94 38447.79 00:24:32.442 { 00:24:32.442 "results": [ 00:24:32.442 { 00:24:32.442 "job": "nvme0n1", 00:24:32.442 "core_mask": "0x2", 00:24:32.442 "workload": "verify", 00:24:32.442 "status": "finished", 00:24:32.442 "verify_range": { 00:24:32.442 "start": 0, 00:24:32.442 "length": 8192 00:24:32.442 }, 00:24:32.442 "queue_depth": 128, 00:24:32.442 "io_size": 4096, 00:24:32.442 "runtime": 1.04323, 00:24:32.442 "iops": 3251.440238490074, 00:24:32.442 "mibps": 12.700938431601852, 00:24:32.442 "io_failed": 0, 00:24:32.442 "io_timeout": 0, 00:24:32.442 "avg_latency_us": 38645.21615653389, 00:24:32.442 "min_latency_us": 11213.937777777777, 00:24:32.442 "max_latency_us": 38447.78666666667 00:24:32.442 } 00:24:32.442 ], 00:24:32.442 "core_count": 1 00:24:32.442 } 00:24:32.442 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:32.442 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.442 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:32.701 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.701 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:32.701 "subsystems": [ 00:24:32.701 { 00:24:32.701 "subsystem": 
"keyring", 00:24:32.701 "config": [ 00:24:32.701 { 00:24:32.701 "method": "keyring_file_add_key", 00:24:32.701 "params": { 00:24:32.701 "name": "key0", 00:24:32.701 "path": "/tmp/tmp.n0H6QGcfsI" 00:24:32.701 } 00:24:32.701 } 00:24:32.701 ] 00:24:32.701 }, 00:24:32.701 { 00:24:32.701 "subsystem": "iobuf", 00:24:32.701 "config": [ 00:24:32.701 { 00:24:32.701 "method": "iobuf_set_options", 00:24:32.701 "params": { 00:24:32.701 "small_pool_count": 8192, 00:24:32.701 "large_pool_count": 1024, 00:24:32.701 "small_bufsize": 8192, 00:24:32.701 "large_bufsize": 135168 00:24:32.701 } 00:24:32.701 } 00:24:32.701 ] 00:24:32.701 }, 00:24:32.701 { 00:24:32.701 "subsystem": "sock", 00:24:32.701 "config": [ 00:24:32.701 { 00:24:32.701 "method": "sock_set_default_impl", 00:24:32.701 "params": { 00:24:32.701 "impl_name": "posix" 00:24:32.701 } 00:24:32.701 }, 00:24:32.701 { 00:24:32.701 "method": "sock_impl_set_options", 00:24:32.701 "params": { 00:24:32.701 "impl_name": "ssl", 00:24:32.701 "recv_buf_size": 4096, 00:24:32.701 "send_buf_size": 4096, 00:24:32.701 "enable_recv_pipe": true, 00:24:32.701 "enable_quickack": false, 00:24:32.701 "enable_placement_id": 0, 00:24:32.701 "enable_zerocopy_send_server": true, 00:24:32.701 "enable_zerocopy_send_client": false, 00:24:32.701 "zerocopy_threshold": 0, 00:24:32.701 "tls_version": 0, 00:24:32.701 "enable_ktls": false 00:24:32.701 } 00:24:32.701 }, 00:24:32.701 { 00:24:32.701 "method": "sock_impl_set_options", 00:24:32.701 "params": { 00:24:32.701 "impl_name": "posix", 00:24:32.701 "recv_buf_size": 2097152, 00:24:32.701 "send_buf_size": 2097152, 00:24:32.701 "enable_recv_pipe": true, 00:24:32.701 "enable_quickack": false, 00:24:32.701 "enable_placement_id": 0, 00:24:32.701 "enable_zerocopy_send_server": true, 00:24:32.701 "enable_zerocopy_send_client": false, 00:24:32.701 "zerocopy_threshold": 0, 00:24:32.701 "tls_version": 0, 00:24:32.701 "enable_ktls": false 00:24:32.701 } 00:24:32.701 } 00:24:32.701 ] 00:24:32.701 }, 00:24:32.701 { 
00:24:32.701 "subsystem": "vmd", 00:24:32.701 "config": [] 00:24:32.701 }, 00:24:32.701 { 00:24:32.701 "subsystem": "accel", 00:24:32.701 "config": [ 00:24:32.701 { 00:24:32.701 "method": "accel_set_options", 00:24:32.701 "params": { 00:24:32.701 "small_cache_size": 128, 00:24:32.701 "large_cache_size": 16, 00:24:32.701 "task_count": 2048, 00:24:32.701 "sequence_count": 2048, 00:24:32.701 "buf_count": 2048 00:24:32.701 } 00:24:32.701 } 00:24:32.701 ] 00:24:32.701 }, 00:24:32.701 { 00:24:32.701 "subsystem": "bdev", 00:24:32.701 "config": [ 00:24:32.701 { 00:24:32.701 "method": "bdev_set_options", 00:24:32.701 "params": { 00:24:32.701 "bdev_io_pool_size": 65535, 00:24:32.701 "bdev_io_cache_size": 256, 00:24:32.701 "bdev_auto_examine": true, 00:24:32.701 "iobuf_small_cache_size": 128, 00:24:32.701 "iobuf_large_cache_size": 16 00:24:32.701 } 00:24:32.701 }, 00:24:32.701 { 00:24:32.701 "method": "bdev_raid_set_options", 00:24:32.701 "params": { 00:24:32.701 "process_window_size_kb": 1024, 00:24:32.701 "process_max_bandwidth_mb_sec": 0 00:24:32.701 } 00:24:32.701 }, 00:24:32.701 { 00:24:32.701 "method": "bdev_iscsi_set_options", 00:24:32.701 "params": { 00:24:32.701 "timeout_sec": 30 00:24:32.701 } 00:24:32.701 }, 00:24:32.701 { 00:24:32.701 "method": "bdev_nvme_set_options", 00:24:32.701 "params": { 00:24:32.701 "action_on_timeout": "none", 00:24:32.701 "timeout_us": 0, 00:24:32.701 "timeout_admin_us": 0, 00:24:32.701 "keep_alive_timeout_ms": 10000, 00:24:32.701 "arbitration_burst": 0, 00:24:32.701 "low_priority_weight": 0, 00:24:32.701 "medium_priority_weight": 0, 00:24:32.701 "high_priority_weight": 0, 00:24:32.701 "nvme_adminq_poll_period_us": 10000, 00:24:32.701 "nvme_ioq_poll_period_us": 0, 00:24:32.701 "io_queue_requests": 0, 00:24:32.701 "delay_cmd_submit": true, 00:24:32.701 "transport_retry_count": 4, 00:24:32.701 "bdev_retry_count": 3, 00:24:32.701 "transport_ack_timeout": 0, 00:24:32.701 "ctrlr_loss_timeout_sec": 0, 00:24:32.701 "reconnect_delay_sec": 0, 
00:24:32.701 "fast_io_fail_timeout_sec": 0, 00:24:32.701 "disable_auto_failback": false, 00:24:32.701 "generate_uuids": false, 00:24:32.701 "transport_tos": 0, 00:24:32.701 "nvme_error_stat": false, 00:24:32.701 "rdma_srq_size": 0, 00:24:32.701 "io_path_stat": false, 00:24:32.701 "allow_accel_sequence": false, 00:24:32.701 "rdma_max_cq_size": 0, 00:24:32.701 "rdma_cm_event_timeout_ms": 0, 00:24:32.701 "dhchap_digests": [ 00:24:32.701 "sha256", 00:24:32.701 "sha384", 00:24:32.701 "sha512" 00:24:32.701 ], 00:24:32.701 "dhchap_dhgroups": [ 00:24:32.701 "null", 00:24:32.701 "ffdhe2048", 00:24:32.701 "ffdhe3072", 00:24:32.701 "ffdhe4096", 00:24:32.701 "ffdhe6144", 00:24:32.701 "ffdhe8192" 00:24:32.701 ] 00:24:32.701 } 00:24:32.701 }, 00:24:32.701 { 00:24:32.701 "method": "bdev_nvme_set_hotplug", 00:24:32.701 "params": { 00:24:32.701 "period_us": 100000, 00:24:32.701 "enable": false 00:24:32.701 } 00:24:32.701 }, 00:24:32.701 { 00:24:32.701 "method": "bdev_malloc_create", 00:24:32.701 "params": { 00:24:32.701 "name": "malloc0", 00:24:32.701 "num_blocks": 8192, 00:24:32.701 "block_size": 4096, 00:24:32.701 "physical_block_size": 4096, 00:24:32.701 "uuid": "e971d29d-6620-464f-bd94-7b8eb0901628", 00:24:32.701 "optimal_io_boundary": 0, 00:24:32.701 "md_size": 0, 00:24:32.701 "dif_type": 0, 00:24:32.701 "dif_is_head_of_md": false, 00:24:32.701 "dif_pi_format": 0 00:24:32.701 } 00:24:32.701 }, 00:24:32.701 { 00:24:32.701 "method": "bdev_wait_for_examine" 00:24:32.701 } 00:24:32.701 ] 00:24:32.701 }, 00:24:32.701 { 00:24:32.701 "subsystem": "nbd", 00:24:32.701 "config": [] 00:24:32.701 }, 00:24:32.701 { 00:24:32.701 "subsystem": "scheduler", 00:24:32.701 "config": [ 00:24:32.701 { 00:24:32.701 "method": "framework_set_scheduler", 00:24:32.701 "params": { 00:24:32.701 "name": "static" 00:24:32.701 } 00:24:32.701 } 00:24:32.701 ] 00:24:32.701 }, 00:24:32.701 { 00:24:32.701 "subsystem": "nvmf", 00:24:32.701 "config": [ 00:24:32.701 { 00:24:32.701 "method": "nvmf_set_config", 
00:24:32.701 "params": { 00:24:32.701 "discovery_filter": "match_any", 00:24:32.701 "admin_cmd_passthru": { 00:24:32.701 "identify_ctrlr": false 00:24:32.701 }, 00:24:32.701 "dhchap_digests": [ 00:24:32.701 "sha256", 00:24:32.702 "sha384", 00:24:32.702 "sha512" 00:24:32.702 ], 00:24:32.702 "dhchap_dhgroups": [ 00:24:32.702 "null", 00:24:32.702 "ffdhe2048", 00:24:32.702 "ffdhe3072", 00:24:32.702 "ffdhe4096", 00:24:32.702 "ffdhe6144", 00:24:32.702 "ffdhe8192" 00:24:32.702 ] 00:24:32.702 } 00:24:32.702 }, 00:24:32.702 { 00:24:32.702 "method": "nvmf_set_max_subsystems", 00:24:32.702 "params": { 00:24:32.702 "max_subsystems": 1024 00:24:32.702 } 00:24:32.702 }, 00:24:32.702 { 00:24:32.702 "method": "nvmf_set_crdt", 00:24:32.702 "params": { 00:24:32.702 "crdt1": 0, 00:24:32.702 "crdt2": 0, 00:24:32.702 "crdt3": 0 00:24:32.702 } 00:24:32.702 }, 00:24:32.702 { 00:24:32.702 "method": "nvmf_create_transport", 00:24:32.702 "params": { 00:24:32.702 "trtype": "TCP", 00:24:32.702 "max_queue_depth": 128, 00:24:32.702 "max_io_qpairs_per_ctrlr": 127, 00:24:32.702 "in_capsule_data_size": 4096, 00:24:32.702 "max_io_size": 131072, 00:24:32.702 "io_unit_size": 131072, 00:24:32.702 "max_aq_depth": 128, 00:24:32.702 "num_shared_buffers": 511, 00:24:32.702 "buf_cache_size": 4294967295, 00:24:32.702 "dif_insert_or_strip": false, 00:24:32.702 "zcopy": false, 00:24:32.702 "c2h_success": false, 00:24:32.702 "sock_priority": 0, 00:24:32.702 "abort_timeout_sec": 1, 00:24:32.702 "ack_timeout": 0, 00:24:32.702 "data_wr_pool_size": 0 00:24:32.702 } 00:24:32.702 }, 00:24:32.702 { 00:24:32.702 "method": "nvmf_create_subsystem", 00:24:32.702 "params": { 00:24:32.702 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.702 "allow_any_host": false, 00:24:32.702 "serial_number": "00000000000000000000", 00:24:32.702 "model_number": "SPDK bdev Controller", 00:24:32.702 "max_namespaces": 32, 00:24:32.702 "min_cntlid": 1, 00:24:32.702 "max_cntlid": 65519, 00:24:32.702 "ana_reporting": false 00:24:32.702 } 
00:24:32.702 }, 00:24:32.702 { 00:24:32.702 "method": "nvmf_subsystem_add_host", 00:24:32.702 "params": { 00:24:32.702 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.702 "host": "nqn.2016-06.io.spdk:host1", 00:24:32.702 "psk": "key0" 00:24:32.702 } 00:24:32.702 }, 00:24:32.702 { 00:24:32.702 "method": "nvmf_subsystem_add_ns", 00:24:32.702 "params": { 00:24:32.702 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.702 "namespace": { 00:24:32.702 "nsid": 1, 00:24:32.702 "bdev_name": "malloc0", 00:24:32.702 "nguid": "E971D29D6620464FBD947B8EB0901628", 00:24:32.702 "uuid": "e971d29d-6620-464f-bd94-7b8eb0901628", 00:24:32.702 "no_auto_visible": false 00:24:32.702 } 00:24:32.702 } 00:24:32.702 }, 00:24:32.702 { 00:24:32.702 "method": "nvmf_subsystem_add_listener", 00:24:32.702 "params": { 00:24:32.702 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.702 "listen_address": { 00:24:32.702 "trtype": "TCP", 00:24:32.702 "adrfam": "IPv4", 00:24:32.702 "traddr": "10.0.0.2", 00:24:32.702 "trsvcid": "4420" 00:24:32.702 }, 00:24:32.702 "secure_channel": false, 00:24:32.702 "sock_impl": "ssl" 00:24:32.702 } 00:24:32.702 } 00:24:32.702 ] 00:24:32.702 } 00:24:32.702 ] 00:24:32.702 }' 00:24:32.702 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:32.960 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:32.960 "subsystems": [ 00:24:32.960 { 00:24:32.960 "subsystem": "keyring", 00:24:32.960 "config": [ 00:24:32.960 { 00:24:32.960 "method": "keyring_file_add_key", 00:24:32.960 "params": { 00:24:32.960 "name": "key0", 00:24:32.960 "path": "/tmp/tmp.n0H6QGcfsI" 00:24:32.960 } 00:24:32.960 } 00:24:32.960 ] 00:24:32.960 }, 00:24:32.960 { 00:24:32.960 "subsystem": "iobuf", 00:24:32.960 "config": [ 00:24:32.960 { 00:24:32.960 "method": "iobuf_set_options", 00:24:32.960 "params": { 00:24:32.960 "small_pool_count": 8192, 00:24:32.960 
"large_pool_count": 1024, 00:24:32.960 "small_bufsize": 8192, 00:24:32.960 "large_bufsize": 135168 00:24:32.961 } 00:24:32.961 } 00:24:32.961 ] 00:24:32.961 }, 00:24:32.961 { 00:24:32.961 "subsystem": "sock", 00:24:32.961 "config": [ 00:24:32.961 { 00:24:32.961 "method": "sock_set_default_impl", 00:24:32.961 "params": { 00:24:32.961 "impl_name": "posix" 00:24:32.961 } 00:24:32.961 }, 00:24:32.961 { 00:24:32.961 "method": "sock_impl_set_options", 00:24:32.961 "params": { 00:24:32.961 "impl_name": "ssl", 00:24:32.961 "recv_buf_size": 4096, 00:24:32.961 "send_buf_size": 4096, 00:24:32.961 "enable_recv_pipe": true, 00:24:32.961 "enable_quickack": false, 00:24:32.961 "enable_placement_id": 0, 00:24:32.961 "enable_zerocopy_send_server": true, 00:24:32.961 "enable_zerocopy_send_client": false, 00:24:32.961 "zerocopy_threshold": 0, 00:24:32.961 "tls_version": 0, 00:24:32.961 "enable_ktls": false 00:24:32.961 } 00:24:32.961 }, 00:24:32.961 { 00:24:32.961 "method": "sock_impl_set_options", 00:24:32.961 "params": { 00:24:32.961 "impl_name": "posix", 00:24:32.961 "recv_buf_size": 2097152, 00:24:32.961 "send_buf_size": 2097152, 00:24:32.961 "enable_recv_pipe": true, 00:24:32.961 "enable_quickack": false, 00:24:32.961 "enable_placement_id": 0, 00:24:32.961 "enable_zerocopy_send_server": true, 00:24:32.961 "enable_zerocopy_send_client": false, 00:24:32.961 "zerocopy_threshold": 0, 00:24:32.961 "tls_version": 0, 00:24:32.961 "enable_ktls": false 00:24:32.961 } 00:24:32.961 } 00:24:32.961 ] 00:24:32.961 }, 00:24:32.961 { 00:24:32.961 "subsystem": "vmd", 00:24:32.961 "config": [] 00:24:32.961 }, 00:24:32.961 { 00:24:32.961 "subsystem": "accel", 00:24:32.961 "config": [ 00:24:32.961 { 00:24:32.961 "method": "accel_set_options", 00:24:32.961 "params": { 00:24:32.961 "small_cache_size": 128, 00:24:32.961 "large_cache_size": 16, 00:24:32.961 "task_count": 2048, 00:24:32.961 "sequence_count": 2048, 00:24:32.961 "buf_count": 2048 00:24:32.961 } 00:24:32.961 } 00:24:32.961 ] 00:24:32.961 
}, 00:24:32.961 { 00:24:32.961 "subsystem": "bdev", 00:24:32.961 "config": [ 00:24:32.961 { 00:24:32.961 "method": "bdev_set_options", 00:24:32.961 "params": { 00:24:32.961 "bdev_io_pool_size": 65535, 00:24:32.961 "bdev_io_cache_size": 256, 00:24:32.961 "bdev_auto_examine": true, 00:24:32.961 "iobuf_small_cache_size": 128, 00:24:32.961 "iobuf_large_cache_size": 16 00:24:32.961 } 00:24:32.961 }, 00:24:32.961 { 00:24:32.961 "method": "bdev_raid_set_options", 00:24:32.961 "params": { 00:24:32.961 "process_window_size_kb": 1024, 00:24:32.961 "process_max_bandwidth_mb_sec": 0 00:24:32.961 } 00:24:32.961 }, 00:24:32.961 { 00:24:32.961 "method": "bdev_iscsi_set_options", 00:24:32.961 "params": { 00:24:32.961 "timeout_sec": 30 00:24:32.961 } 00:24:32.961 }, 00:24:32.961 { 00:24:32.961 "method": "bdev_nvme_set_options", 00:24:32.961 "params": { 00:24:32.961 "action_on_timeout": "none", 00:24:32.961 "timeout_us": 0, 00:24:32.961 "timeout_admin_us": 0, 00:24:32.961 "keep_alive_timeout_ms": 10000, 00:24:32.961 "arbitration_burst": 0, 00:24:32.961 "low_priority_weight": 0, 00:24:32.961 "medium_priority_weight": 0, 00:24:32.961 "high_priority_weight": 0, 00:24:32.961 "nvme_adminq_poll_period_us": 10000, 00:24:32.961 "nvme_ioq_poll_period_us": 0, 00:24:32.961 "io_queue_requests": 512, 00:24:32.961 "delay_cmd_submit": true, 00:24:32.961 "transport_retry_count": 4, 00:24:32.961 "bdev_retry_count": 3, 00:24:32.961 "transport_ack_timeout": 0, 00:24:32.961 "ctrlr_loss_timeout_sec": 0, 00:24:32.961 "reconnect_delay_sec": 0, 00:24:32.961 "fast_io_fail_timeout_sec": 0, 00:24:32.961 "disable_auto_failback": false, 00:24:32.961 "generate_uuids": false, 00:24:32.961 "transport_tos": 0, 00:24:32.961 "nvme_error_stat": false, 00:24:32.961 "rdma_srq_size": 0, 00:24:32.961 "io_path_stat": false, 00:24:32.961 "allow_accel_sequence": false, 00:24:32.961 "rdma_max_cq_size": 0, 00:24:32.961 "rdma_cm_event_timeout_ms": 0, 00:24:32.961 "dhchap_digests": [ 00:24:32.961 "sha256", 00:24:32.961 "sha384", 
00:24:32.961 "sha512" 00:24:32.961 ], 00:24:32.961 "dhchap_dhgroups": [ 00:24:32.961 "null", 00:24:32.961 "ffdhe2048", 00:24:32.961 "ffdhe3072", 00:24:32.961 "ffdhe4096", 00:24:32.961 "ffdhe6144", 00:24:32.961 "ffdhe8192" 00:24:32.961 ] 00:24:32.961 } 00:24:32.961 }, 00:24:32.961 { 00:24:32.961 "method": "bdev_nvme_attach_controller", 00:24:32.961 "params": { 00:24:32.961 "name": "nvme0", 00:24:32.961 "trtype": "TCP", 00:24:32.961 "adrfam": "IPv4", 00:24:32.961 "traddr": "10.0.0.2", 00:24:32.961 "trsvcid": "4420", 00:24:32.961 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.961 "prchk_reftag": false, 00:24:32.961 "prchk_guard": false, 00:24:32.961 "ctrlr_loss_timeout_sec": 0, 00:24:32.961 "reconnect_delay_sec": 0, 00:24:32.961 "fast_io_fail_timeout_sec": 0, 00:24:32.961 "psk": "key0", 00:24:32.961 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:32.961 "hdgst": false, 00:24:32.961 "ddgst": false, 00:24:32.961 "multipath": "multipath" 00:24:32.961 } 00:24:32.961 }, 00:24:32.961 { 00:24:32.961 "method": "bdev_nvme_set_hotplug", 00:24:32.961 "params": { 00:24:32.961 "period_us": 100000, 00:24:32.961 "enable": false 00:24:32.961 } 00:24:32.961 }, 00:24:32.961 { 00:24:32.961 "method": "bdev_enable_histogram", 00:24:32.961 "params": { 00:24:32.961 "name": "nvme0n1", 00:24:32.961 "enable": true 00:24:32.961 } 00:24:32.961 }, 00:24:32.961 { 00:24:32.961 "method": "bdev_wait_for_examine" 00:24:32.961 } 00:24:32.961 ] 00:24:32.961 }, 00:24:32.961 { 00:24:32.961 "subsystem": "nbd", 00:24:32.961 "config": [] 00:24:32.961 } 00:24:32.961 ] 00:24:32.961 }' 00:24:32.961 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 280157 00:24:32.961 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 280157 ']' 00:24:32.961 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 280157 00:24:32.961 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 
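The bdevperf summary earlier in this run reports 3251.44 IOPS at a 4096-byte I/O size and 12.70 MiB/s. Those two figures are consistent, as a quick arithmetic check shows (1 MiB = 1048576 bytes):

```shell
# Cross-check of the bdevperf summary: IOPS x io_size should reproduce
# the reported MiB/s figure.
mibps=$(awk 'BEGIN { printf "%.2f", 3251.44 * 4096 / 1048576 }')
echo "$mibps MiB/s"   # matches the 12.70 MiB/s in the results table
```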
00:24:32.961 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:32.961 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 280157 00:24:32.961 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:32.961 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:32.961 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 280157' 00:24:32.961 killing process with pid 280157 00:24:32.961 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 280157 00:24:32.961 Received shutdown signal, test time was about 1.000000 seconds 00:24:32.961 00:24:32.961 Latency(us) 00:24:32.961 [2024-10-11T20:47:36.229Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.961 [2024-10-11T20:47:36.229Z] =================================================================================================================== 00:24:32.961 [2024-10-11T20:47:36.229Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:32.961 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 280157 00:24:33.220 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 280100 00:24:33.220 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 280100 ']' 00:24:33.220 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 280100 00:24:33.220 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:33.220 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:33.220 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 280100 00:24:33.220 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:33.220 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:33.220 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 280100' 00:24:33.220 killing process with pid 280100 00:24:33.220 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 280100 00:24:33.220 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 280100 00:24:33.478 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:33.478 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:33.478 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:33.479 "subsystems": [ 00:24:33.479 { 00:24:33.479 "subsystem": "keyring", 00:24:33.479 "config": [ 00:24:33.479 { 00:24:33.479 "method": "keyring_file_add_key", 00:24:33.479 "params": { 00:24:33.479 "name": "key0", 00:24:33.479 "path": "/tmp/tmp.n0H6QGcfsI" 00:24:33.479 } 00:24:33.479 } 00:24:33.479 ] 00:24:33.479 }, 00:24:33.479 { 00:24:33.479 "subsystem": "iobuf", 00:24:33.479 "config": [ 00:24:33.479 { 00:24:33.479 "method": "iobuf_set_options", 00:24:33.479 "params": { 00:24:33.479 "small_pool_count": 8192, 00:24:33.479 "large_pool_count": 1024, 00:24:33.479 "small_bufsize": 8192, 00:24:33.479 "large_bufsize": 135168 00:24:33.479 } 00:24:33.479 } 00:24:33.479 ] 00:24:33.479 }, 00:24:33.479 { 00:24:33.479 "subsystem": "sock", 00:24:33.479 "config": [ 00:24:33.479 { 00:24:33.479 "method": "sock_set_default_impl", 00:24:33.479 "params": { 00:24:33.479 "impl_name": "posix" 00:24:33.479 } 00:24:33.479 }, 00:24:33.479 { 00:24:33.479 "method": "sock_impl_set_options", 00:24:33.479 "params": { 
00:24:33.479 "impl_name": "ssl", 00:24:33.479 "recv_buf_size": 4096, 00:24:33.479 "send_buf_size": 4096, 00:24:33.479 "enable_recv_pipe": true, 00:24:33.479 "enable_quickack": false, 00:24:33.479 "enable_placement_id": 0, 00:24:33.479 "enable_zerocopy_send_server": true, 00:24:33.479 "enable_zerocopy_send_client": false, 00:24:33.479 "zerocopy_threshold": 0, 00:24:33.479 "tls_version": 0, 00:24:33.479 "enable_ktls": false 00:24:33.479 } 00:24:33.479 }, 00:24:33.479 { 00:24:33.479 "method": "sock_impl_set_options", 00:24:33.479 "params": { 00:24:33.479 "impl_name": "posix", 00:24:33.479 "recv_buf_size": 2097152, 00:24:33.479 "send_buf_size": 2097152, 00:24:33.479 "enable_recv_pipe": true, 00:24:33.479 "enable_quickack": false, 00:24:33.479 "enable_placement_id": 0, 00:24:33.479 "enable_zerocopy_send_server": true, 00:24:33.479 "enable_zerocopy_send_client": false, 00:24:33.479 "zerocopy_threshold": 0, 00:24:33.479 "tls_version": 0, 00:24:33.479 "enable_ktls": false 00:24:33.479 } 00:24:33.479 } 00:24:33.479 ] 00:24:33.479 }, 00:24:33.479 { 00:24:33.479 "subsystem": "vmd", 00:24:33.479 "config": [] 00:24:33.479 }, 00:24:33.479 { 00:24:33.479 "subsystem": "accel", 00:24:33.479 "config": [ 00:24:33.479 { 00:24:33.479 "method": "accel_set_options", 00:24:33.479 "params": { 00:24:33.479 "small_cache_size": 128, 00:24:33.479 "large_cache_size": 16, 00:24:33.479 "task_count": 2048, 00:24:33.479 "sequence_count": 2048, 00:24:33.479 "buf_count": 2048 00:24:33.479 } 00:24:33.479 } 00:24:33.479 ] 00:24:33.479 }, 00:24:33.479 { 00:24:33.479 "subsystem": "bdev", 00:24:33.479 "config": [ 00:24:33.479 { 00:24:33.479 "method": "bdev_set_options", 00:24:33.479 "params": { 00:24:33.479 "bdev_io_pool_size": 65535, 00:24:33.479 "bdev_io_cache_size": 256, 00:24:33.479 "bdev_auto_examine": true, 00:24:33.479 "iobuf_small_cache_size": 128, 00:24:33.479 "iobuf_large_cache_size": 16 00:24:33.479 } 00:24:33.479 }, 00:24:33.479 { 00:24:33.479 "method": "bdev_raid_set_options", 00:24:33.479 
"params": { 00:24:33.479 "process_window_size_kb": 1024, 00:24:33.479 "process_max_bandwidth_mb_sec": 0 00:24:33.479 } 00:24:33.479 }, 00:24:33.479 { 00:24:33.479 "method": "bdev_iscsi_set_options", 00:24:33.479 "params": { 00:24:33.479 "timeout_sec": 30 00:24:33.479 } 00:24:33.479 }, 00:24:33.479 { 00:24:33.479 "method": "bdev_nvme_set_options", 00:24:33.479 "params": { 00:24:33.479 "action_on_timeout": "none", 00:24:33.479 "timeout_us": 0, 00:24:33.479 "timeout_admin_us": 0, 00:24:33.479 "keep_alive_timeout_ms": 10000, 00:24:33.479 "arbitration_burst": 0, 00:24:33.479 "low_priority_weight": 0, 00:24:33.479 "medium_priority_weight": 0, 00:24:33.479 "high_priority_weight": 0, 00:24:33.479 "nvme_adminq_poll_period_us": 10000, 00:24:33.479 "nvme_ioq_poll_period_us": 0, 00:24:33.479 "io_queue_requests": 0, 00:24:33.479 "delay_cmd_submit": true, 00:24:33.479 "transport_retry_count": 4, 00:24:33.479 "bdev_retry_count": 3, 00:24:33.479 "transport_ack_timeout": 0, 00:24:33.479 "ctrlr_loss_timeout_sec": 0, 00:24:33.479 "reconnect_delay_sec": 0, 00:24:33.479 "fast_io_fail_timeout_sec": 0, 00:24:33.479 "disable_auto_failback": false, 00:24:33.479 "generate_uuids": false, 00:24:33.479 "transport_tos": 0, 00:24:33.479 "nvme_error_stat": false, 00:24:33.479 "rdma_srq_size": 0, 00:24:33.479 "io_path_stat": false, 00:24:33.479 "allow_accel_sequence": false, 00:24:33.479 "rdma_max_cq_size": 0, 00:24:33.479 "rdma_cm_event_timeout_ms": 0, 00:24:33.479 "dhchap_digests": [ 00:24:33.479 "sha256", 00:24:33.479 "sha384", 00:24:33.479 "sha512" 00:24:33.479 ], 00:24:33.479 "dhchap_dhgroups": [ 00:24:33.479 "null", 00:24:33.479 "ffdhe2048", 00:24:33.479 "ffdhe3072", 00:24:33.479 "ffdhe4096", 00:24:33.479 "ffdhe6144", 00:24:33.479 "ffdhe8192" 00:24:33.479 ] 00:24:33.479 } 00:24:33.479 }, 00:24:33.479 { 00:24:33.479 "method": "bdev_nvme_set_hotplug", 00:24:33.479 "params": { 00:24:33.479 "period_us": 100000, 00:24:33.479 "enable": false 00:24:33.479 } 00:24:33.479 }, 00:24:33.479 { 
00:24:33.479 "method": "bdev_malloc_create", 00:24:33.479 "params": { 00:24:33.479 "name": "malloc0", 00:24:33.479 "num_blocks": 8192, 00:24:33.479 "block_size": 4096, 00:24:33.479 "physical_block_size": 4096, 00:24:33.479 "uuid": "e971d29d-6620-464f-bd94-7b8eb0901628", 00:24:33.479 "optimal_io_boundary": 0, 00:24:33.479 "md_size": 0, 00:24:33.479 "dif_type": 0, 00:24:33.479 "dif_is_head_of_md": false, 00:24:33.479 "dif_pi_format": 0 00:24:33.479 } 00:24:33.479 }, 00:24:33.479 { 00:24:33.479 "method": "bdev_wait_for_examine" 00:24:33.479 } 00:24:33.479 ] 00:24:33.479 }, 00:24:33.479 { 00:24:33.479 "subsystem": "nbd", 00:24:33.479 "config": [] 00:24:33.479 }, 00:24:33.479 { 00:24:33.479 "subsystem": "scheduler", 00:24:33.479 "config": [ 00:24:33.479 { 00:24:33.479 "method": "framework_set_scheduler", 00:24:33.479 "params": { 00:24:33.479 "name": "static" 00:24:33.479 } 00:24:33.479 } 00:24:33.479 ] 00:24:33.479 }, 00:24:33.479 { 00:24:33.479 "subsystem": "nvmf", 00:24:33.479 "config": [ 00:24:33.479 { 00:24:33.479 "method": "nvmf_set_config", 00:24:33.479 "params": { 00:24:33.479 "discovery_filter": "match_any", 00:24:33.479 "admin_cmd_passthru": { 00:24:33.479 "identify_ctrlr": false 00:24:33.479 }, 00:24:33.479 "dhchap_digests": [ 00:24:33.479 "sha256", 00:24:33.479 "sha384", 00:24:33.479 "sha512" 00:24:33.479 ], 00:24:33.479 "dhchap_dhgroups": [ 00:24:33.479 "null", 00:24:33.479 "ffdhe2048", 00:24:33.479 "ffdhe3072", 00:24:33.479 "ffdhe4096", 00:24:33.479 "ffdhe6144", 00:24:33.479 "ffdhe8192" 00:24:33.479 ] 00:24:33.479 } 00:24:33.479 }, 00:24:33.479 { 00:24:33.479 "method": "nvmf_set_max_subsystems", 00:24:33.479 "params": { 00:24:33.479 "max_subsystems": 1024 00:24:33.479 } 00:24:33.479 }, 00:24:33.479 { 00:24:33.479 "method": "nvmf_set_crdt", 00:24:33.479 "params": { 00:24:33.479 "crdt1": 0, 00:24:33.479 "crdt2": 0, 00:24:33.479 "crdt3": 0 00:24:33.479 } 00:24:33.479 }, 00:24:33.479 { 00:24:33.479 "method": "nvmf_create_transport", 00:24:33.479 "params": { 
00:24:33.479 "trtype": "TCP", 00:24:33.479 "max_queue_depth": 128, 00:24:33.479 "max_io_qpairs_per_ctrlr": 127, 00:24:33.479 "in_capsule_data_size": 4096, 00:24:33.479 "max_io_size": 131072, 00:24:33.479 "io_unit_size": 131072, 00:24:33.479 "max_aq_depth": 128, 00:24:33.479 "num_shared_buffers": 511, 00:24:33.479 "buf_cache_size": 4294967295, 00:24:33.479 "dif_insert_or_strip": false, 00:24:33.479 "zcopy": false, 00:24:33.479 "c2h_success": false, 00:24:33.479 "sock_priority": 0, 00:24:33.479 "abort_timeout_sec": 1, 00:24:33.479 "ack_timeout": 0, 00:24:33.479 "data_wr_pool_size": 0 00:24:33.479 } 00:24:33.479 }, 00:24:33.479 { 00:24:33.479 "method": "nvmf_create_subsystem", 00:24:33.479 "params": { 00:24:33.479 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:33.479 "allow_any_host": false, 00:24:33.479 "serial_number": "00000000000000000000", 00:24:33.479 "model_number": "SPDK bdev Controller", 00:24:33.479 "max_namespaces": 32, 00:24:33.479 "min_cntlid": 1, 00:24:33.479 "max_cntlid": 65519, 00:24:33.479 "ana_reporting": false 00:24:33.479 } 00:24:33.479 }, 00:24:33.479 { 00:24:33.479 "method": "nvmf_subsystem_add_host", 00:24:33.479 "params": { 00:24:33.479 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:33.479 "host": "nqn.2016-06.io.spdk:host1", 00:24:33.479 "psk": "key0" 00:24:33.479 } 00:24:33.479 }, 00:24:33.479 { 00:24:33.479 "method": "nvmf_subsystem_add_ns", 00:24:33.479 "params": { 00:24:33.479 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:33.479 "namespace": { 00:24:33.479 "nsid": 1, 00:24:33.479 "bdev_name": "malloc0", 00:24:33.479 "nguid": "E971D29D6620464FBD947B8EB0901628", 00:24:33.480 "uuid": "e971d29d-6620-464f-bd94-7b8eb0901628", 00:24:33.480 "no_auto_visible": false 00:24:33.480 } 00:24:33.480 } 00:24:33.480 }, 00:24:33.480 { 00:24:33.480 "method": "nvmf_subsystem_add_listener", 00:24:33.480 "params": { 00:24:33.480 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:33.480 "listen_address": { 00:24:33.480 "trtype": "TCP", 00:24:33.480 "adrfam": "IPv4", 00:24:33.480 
"traddr": "10.0.0.2", 00:24:33.480 "trsvcid": "4420" 00:24:33.480 }, 00:24:33.480 "secure_channel": false, 00:24:33.480 "sock_impl": "ssl" 00:24:33.480 } 00:24:33.480 } 00:24:33.480 ] 00:24:33.480 } 00:24:33.480 ] 00:24:33.480 }' 00:24:33.480 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:33.480 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:33.480 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=280446 00:24:33.480 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:33.480 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 280446 00:24:33.480 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 280446 ']' 00:24:33.480 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.480 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:33.480 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.480 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:33.480 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:33.480 [2024-10-11 22:47:36.632099] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:24:33.480 [2024-10-11 22:47:36.632194] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.480 [2024-10-11 22:47:36.699051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.480 [2024-10-11 22:47:36.745617] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:33.480 [2024-10-11 22:47:36.745682] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:33.480 [2024-10-11 22:47:36.745706] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:33.480 [2024-10-11 22:47:36.745723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:33.480 [2024-10-11 22:47:36.745732] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:33.480 [2024-10-11 22:47:36.746417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.739 [2024-10-11 22:47:36.976026] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:33.997 [2024-10-11 22:47:37.008069] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:33.997 [2024-10-11 22:47:37.008328] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:34.563 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:34.563 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:34.563 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:34.563 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:34.563 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.563 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.563 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=280600 00:24:34.563 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 280600 /var/tmp/bdevperf.sock 00:24:34.563 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 280600 ']' 00:24:34.563 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:34.563 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:34.563 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c 
/dev/fd/63 00:24:34.563 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:34.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:34.563 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:34.563 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.563 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:34.563 "subsystems": [ 00:24:34.563 { 00:24:34.563 "subsystem": "keyring", 00:24:34.563 "config": [ 00:24:34.563 { 00:24:34.563 "method": "keyring_file_add_key", 00:24:34.563 "params": { 00:24:34.563 "name": "key0", 00:24:34.563 "path": "/tmp/tmp.n0H6QGcfsI" 00:24:34.563 } 00:24:34.563 } 00:24:34.563 ] 00:24:34.563 }, 00:24:34.563 { 00:24:34.563 "subsystem": "iobuf", 00:24:34.563 "config": [ 00:24:34.563 { 00:24:34.563 "method": "iobuf_set_options", 00:24:34.563 "params": { 00:24:34.563 "small_pool_count": 8192, 00:24:34.563 "large_pool_count": 1024, 00:24:34.563 "small_bufsize": 8192, 00:24:34.563 "large_bufsize": 135168 00:24:34.563 } 00:24:34.563 } 00:24:34.563 ] 00:24:34.563 }, 00:24:34.563 { 00:24:34.563 "subsystem": "sock", 00:24:34.564 "config": [ 00:24:34.564 { 00:24:34.564 "method": "sock_set_default_impl", 00:24:34.564 "params": { 00:24:34.564 "impl_name": "posix" 00:24:34.564 } 00:24:34.564 }, 00:24:34.564 { 00:24:34.564 "method": "sock_impl_set_options", 00:24:34.564 "params": { 00:24:34.564 "impl_name": "ssl", 00:24:34.564 "recv_buf_size": 4096, 00:24:34.564 "send_buf_size": 4096, 00:24:34.564 "enable_recv_pipe": true, 00:24:34.564 "enable_quickack": false, 00:24:34.564 "enable_placement_id": 0, 00:24:34.564 "enable_zerocopy_send_server": true, 00:24:34.564 "enable_zerocopy_send_client": false, 00:24:34.564 "zerocopy_threshold": 0, 00:24:34.564 
"tls_version": 0, 00:24:34.564 "enable_ktls": false 00:24:34.564 } 00:24:34.564 }, 00:24:34.564 { 00:24:34.564 "method": "sock_impl_set_options", 00:24:34.564 "params": { 00:24:34.564 "impl_name": "posix", 00:24:34.564 "recv_buf_size": 2097152, 00:24:34.564 "send_buf_size": 2097152, 00:24:34.564 "enable_recv_pipe": true, 00:24:34.564 "enable_quickack": false, 00:24:34.564 "enable_placement_id": 0, 00:24:34.564 "enable_zerocopy_send_server": true, 00:24:34.564 "enable_zerocopy_send_client": false, 00:24:34.564 "zerocopy_threshold": 0, 00:24:34.564 "tls_version": 0, 00:24:34.564 "enable_ktls": false 00:24:34.564 } 00:24:34.564 } 00:24:34.564 ] 00:24:34.564 }, 00:24:34.564 { 00:24:34.564 "subsystem": "vmd", 00:24:34.564 "config": [] 00:24:34.564 }, 00:24:34.564 { 00:24:34.564 "subsystem": "accel", 00:24:34.564 "config": [ 00:24:34.564 { 00:24:34.564 "method": "accel_set_options", 00:24:34.564 "params": { 00:24:34.564 "small_cache_size": 128, 00:24:34.564 "large_cache_size": 16, 00:24:34.564 "task_count": 2048, 00:24:34.564 "sequence_count": 2048, 00:24:34.564 "buf_count": 2048 00:24:34.564 } 00:24:34.564 } 00:24:34.564 ] 00:24:34.564 }, 00:24:34.564 { 00:24:34.564 "subsystem": "bdev", 00:24:34.564 "config": [ 00:24:34.564 { 00:24:34.564 "method": "bdev_set_options", 00:24:34.564 "params": { 00:24:34.564 "bdev_io_pool_size": 65535, 00:24:34.564 "bdev_io_cache_size": 256, 00:24:34.564 "bdev_auto_examine": true, 00:24:34.564 "iobuf_small_cache_size": 128, 00:24:34.564 "iobuf_large_cache_size": 16 00:24:34.564 } 00:24:34.564 }, 00:24:34.564 { 00:24:34.564 "method": "bdev_raid_set_options", 00:24:34.564 "params": { 00:24:34.564 "process_window_size_kb": 1024, 00:24:34.564 "process_max_bandwidth_mb_sec": 0 00:24:34.564 } 00:24:34.564 }, 00:24:34.564 { 00:24:34.564 "method": "bdev_iscsi_set_options", 00:24:34.564 "params": { 00:24:34.564 "timeout_sec": 30 00:24:34.564 } 00:24:34.564 }, 00:24:34.564 { 00:24:34.564 "method": "bdev_nvme_set_options", 00:24:34.564 "params": { 
00:24:34.564 "action_on_timeout": "none", 00:24:34.564 "timeout_us": 0, 00:24:34.564 "timeout_admin_us": 0, 00:24:34.564 "keep_alive_timeout_ms": 10000, 00:24:34.564 "arbitration_burst": 0, 00:24:34.564 "low_priority_weight": 0, 00:24:34.564 "medium_priority_weight": 0, 00:24:34.564 "high_priority_weight": 0, 00:24:34.564 "nvme_adminq_poll_period_us": 10000, 00:24:34.564 "nvme_ioq_poll_period_us": 0, 00:24:34.564 "io_queue_requests": 512, 00:24:34.564 "delay_cmd_submit": true, 00:24:34.564 "transport_retry_count": 4, 00:24:34.564 "bdev_retry_count": 3, 00:24:34.564 "transport_ack_timeout": 0, 00:24:34.564 "ctrlr_loss_timeout_sec": 0, 00:24:34.564 "reconnect_delay_sec": 0, 00:24:34.564 "fast_io_fail_timeout_sec": 0, 00:24:34.564 "disable_auto_failback": false, 00:24:34.564 "generate_uuids": false, 00:24:34.564 "transport_tos": 0, 00:24:34.564 "nvme_error_stat": false, 00:24:34.564 "rdma_srq_size": 0, 00:24:34.564 "io_path_stat": false, 00:24:34.564 "allow_accel_sequence": false, 00:24:34.564 "rdma_max_cq_size": 0, 00:24:34.564 "rdma_cm_event_timeout_ms": 0, 00:24:34.564 "dhchap_digests": [ 00:24:34.564 "sha256", 00:24:34.564 "sha384", 00:24:34.564 "sha512" 00:24:34.564 ], 00:24:34.564 "dhchap_dhgroups": [ 00:24:34.564 "null", 00:24:34.564 "ffdhe2048", 00:24:34.564 "ffdhe3072", 00:24:34.564 "ffdhe4096", 00:24:34.564 "ffdhe6144", 00:24:34.564 "ffdhe8192" 00:24:34.564 ] 00:24:34.564 } 00:24:34.564 }, 00:24:34.564 { 00:24:34.564 "method": "bdev_nvme_attach_controller", 00:24:34.564 "params": { 00:24:34.564 "name": "nvme0", 00:24:34.564 "trtype": "TCP", 00:24:34.564 "adrfam": "IPv4", 00:24:34.564 "traddr": "10.0.0.2", 00:24:34.564 "trsvcid": "4420", 00:24:34.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.564 "prchk_reftag": false, 00:24:34.564 "prchk_guard": false, 00:24:34.564 "ctrlr_loss_timeout_sec": 0, 00:24:34.564 "reconnect_delay_sec": 0, 00:24:34.564 "fast_io_fail_timeout_sec": 0, 00:24:34.564 "psk": "key0", 00:24:34.564 "hostnqn": 
"nqn.2016-06.io.spdk:host1", 00:24:34.564 "hdgst": false, 00:24:34.564 "ddgst": false, 00:24:34.564 "multipath": "multipath" 00:24:34.564 } 00:24:34.564 }, 00:24:34.564 { 00:24:34.564 "method": "bdev_nvme_set_hotplug", 00:24:34.564 "params": { 00:24:34.564 "period_us": 100000, 00:24:34.564 "enable": false 00:24:34.564 } 00:24:34.564 }, 00:24:34.564 { 00:24:34.564 "method": "bdev_enable_histogram", 00:24:34.564 "params": { 00:24:34.564 "name": "nvme0n1", 00:24:34.564 "enable": true 00:24:34.564 } 00:24:34.564 }, 00:24:34.564 { 00:24:34.564 "method": "bdev_wait_for_examine" 00:24:34.564 } 00:24:34.564 ] 00:24:34.564 }, 00:24:34.564 { 00:24:34.564 "subsystem": "nbd", 00:24:34.564 "config": [] 00:24:34.564 } 00:24:34.564 ] 00:24:34.564 }' 00:24:34.564 [2024-10-11 22:47:37.756031] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:24:34.564 [2024-10-11 22:47:37.756103] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid280600 ] 00:24:34.564 [2024-10-11 22:47:37.814973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.823 [2024-10-11 22:47:37.861402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.823 [2024-10-11 22:47:38.030631] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:35.081 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:35.081 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:35.081 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:35.081 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@279 -- # jq -r '.[].name' 00:24:35.342 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.342 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:35.342 Running I/O for 1 seconds... 00:24:36.418 3125.00 IOPS, 12.21 MiB/s 00:24:36.418 Latency(us) 00:24:36.418 [2024-10-11T20:47:39.686Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.418 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:36.418 Verification LBA range: start 0x0 length 0x2000 00:24:36.418 nvme0n1 : 1.04 3136.43 12.25 0.00 0.00 40246.17 8883.77 36894.34 00:24:36.418 [2024-10-11T20:47:39.686Z] =================================================================================================================== 00:24:36.418 [2024-10-11T20:47:39.686Z] Total : 3136.43 12.25 0.00 0.00 40246.17 8883.77 36894.34 00:24:36.418 { 00:24:36.418 "results": [ 00:24:36.418 { 00:24:36.418 "job": "nvme0n1", 00:24:36.418 "core_mask": "0x2", 00:24:36.418 "workload": "verify", 00:24:36.418 "status": "finished", 00:24:36.418 "verify_range": { 00:24:36.418 "start": 0, 00:24:36.418 "length": 8192 00:24:36.418 }, 00:24:36.418 "queue_depth": 128, 00:24:36.418 "io_size": 4096, 00:24:36.418 "runtime": 1.037484, 00:24:36.418 "iops": 3136.4339112699568, 00:24:36.418 "mibps": 12.251694965898269, 00:24:36.418 "io_failed": 0, 00:24:36.418 "io_timeout": 0, 00:24:36.418 "avg_latency_us": 40246.17393339252, 00:24:36.418 "min_latency_us": 8883.76888888889, 00:24:36.418 "max_latency_us": 36894.34074074074 00:24:36.418 } 00:24:36.418 ], 00:24:36.418 "core_count": 1 00:24:36.418 } 00:24:36.418 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:36.418 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 
-- # cleanup 00:24:36.418 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:36.418 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:24:36.418 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:24:36.418 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:24:36.418 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:36.418 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:36.418 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:36.418 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:36.418 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:36.418 nvmf_trace.0 00:24:36.418 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:24:36.418 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 280600 00:24:36.418 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 280600 ']' 00:24:36.418 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 280600 00:24:36.418 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:36.709 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:36.709 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 280600 00:24:36.709 22:47:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:36.709 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:36.709 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 280600' 00:24:36.709 killing process with pid 280600 00:24:36.709 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 280600 00:24:36.709 Received shutdown signal, test time was about 1.000000 seconds 00:24:36.709 00:24:36.709 Latency(us) 00:24:36.709 [2024-10-11T20:47:39.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.709 [2024-10-11T20:47:39.977Z] =================================================================================================================== 00:24:36.709 [2024-10-11T20:47:39.977Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:36.709 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 280600 00:24:36.709 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:36.709 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:36.709 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:36.709 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:36.709 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:36.709 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:36.709 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:36.709 rmmod nvme_tcp 00:24:36.709 rmmod nvme_fabrics 00:24:36.709 rmmod nvme_keyring 00:24:36.709 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:36.709 22:47:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:36.709 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:36.709 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 280446 ']' 00:24:36.709 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 280446 00:24:36.709 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 280446 ']' 00:24:36.709 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 280446 00:24:36.709 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:36.709 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:36.709 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 280446 00:24:37.004 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:37.004 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:37.004 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 280446' 00:24:37.004 killing process with pid 280446 00:24:37.004 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 280446 00:24:37.004 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 280446 00:24:37.004 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:37.004 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:37.004 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:37.004 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:37.004 22:47:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:24:37.004 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:37.004 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:24:37.004 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:37.004 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:37.004 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.004 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:37.004 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.973 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:38.973 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.d32tio7tS1 /tmp/tmp.klYEdRJfuf /tmp/tmp.n0H6QGcfsI 00:24:38.973 00:24:38.973 real 1m22.068s 00:24:38.973 user 2m17.831s 00:24:38.973 sys 0m24.651s 00:24:38.973 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:38.973 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:38.973 ************************************ 00:24:38.973 END TEST nvmf_tls 00:24:38.973 ************************************ 00:24:38.973 22:47:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:38.973 22:47:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:38.973 22:47:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:24:38.973 22:47:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:39.233 ************************************ 00:24:39.233 START TEST nvmf_fips 00:24:39.233 ************************************ 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:39.233 * Looking for test storage... 00:24:39.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@341 -- # ver2_l=1 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:39.233 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:39.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.234 --rc genhtml_branch_coverage=1 00:24:39.234 --rc genhtml_function_coverage=1 00:24:39.234 --rc genhtml_legend=1 00:24:39.234 --rc geninfo_all_blocks=1 00:24:39.234 --rc geninfo_unexecuted_blocks=1 00:24:39.234 00:24:39.234 ' 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:39.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.234 --rc genhtml_branch_coverage=1 00:24:39.234 --rc genhtml_function_coverage=1 00:24:39.234 --rc genhtml_legend=1 00:24:39.234 --rc geninfo_all_blocks=1 00:24:39.234 --rc geninfo_unexecuted_blocks=1 00:24:39.234 00:24:39.234 ' 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:39.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.234 --rc genhtml_branch_coverage=1 00:24:39.234 --rc genhtml_function_coverage=1 00:24:39.234 --rc genhtml_legend=1 00:24:39.234 --rc geninfo_all_blocks=1 00:24:39.234 --rc geninfo_unexecuted_blocks=1 00:24:39.234 00:24:39.234 ' 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:39.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.234 --rc genhtml_branch_coverage=1 00:24:39.234 --rc genhtml_function_coverage=1 00:24:39.234 --rc genhtml_legend=1 00:24:39.234 --rc geninfo_all_blocks=1 00:24:39.234 --rc geninfo_unexecuted_blocks=1 00:24:39.234 00:24:39.234 ' 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:39.234 22:47:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:39.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:39.234 22:47:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:39.234 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:39.235 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:39.235 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:39.235 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:39.235 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:39.235 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:39.235 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:39.235 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:39.235 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:39.235 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:39.235 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:39.235 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:39.235 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:39.235 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:24:39.235 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:39.235 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:39.235 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:39.235 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:39.235 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:39.235 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:39.235 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:39.235 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:39.235 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:39.235 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:39.235 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:39.235 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:39.235 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:39.235 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:24:39.494 Error setting digest 00:24:39.494 4022457F217F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:39.494 4022457F217F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:39.494 22:47:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:39.494 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
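Editor's note, not part of the log: the `e810+=`, `x722+=`, and `mlx+=` lines above build per-family device lists keyed by vendor:device PCI IDs (`0x8086` = Intel, `0x15b3` = Mellanox). A minimal sketch of that classification, using only the IDs visible in the trace (the function name is illustrative, not from the harness):

```shell
#!/usr/bin/env bash
# Map a vendor:device PCI ID onto the NIC families the harness tracks.
# IDs are taken from the nvmf/common.sh trace lines above; anything the
# trace does not list falls through to "unknown".
classify_nic() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 variants
        0x8086:0x37d2)               echo x722 ;;    # Intel X722
        0x15b3:*)                    echo mlx ;;     # Mellanox family
        *)                           echo unknown ;;
    esac
}
classify_nic 0x8086:0x159b   # the device found later in this run
```

This matches the run below, where both `0000:0a:00.x` functions report `0x8086 - 0x159b` and are treated as e810.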
00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:42.031 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:42.031 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:42.031 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
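Editor's note, not part of the log: the `Found net devices under <bdf>` messages come from globbing each PCI function's `net/` directory in sysfs and stripping the path prefix (the `${pci_net_devs[@]##*/}` expansion in the trace). A hedged sketch of that lookup; the optional sysfs-root parameter is an addition for testability and is not in the harness:

```shell
#!/usr/bin/env bash
# List the network interfaces belonging to one PCI function, the way
# the trace derives cvl_0_0 / cvl_0_1 from 0000:0a:00.0 / 0000:0a:00.1.
list_net_devs() {
    local pci=$1 root=${2:-/sys}   # $2 defaults to the real sysfs
    local devs=("$root/bus/pci/devices/$pci/net/"*)
    # Keep only the interface names, mirroring ${pci_net_devs[@]##*/}.
    printf '%s\n' "${devs[@]##*/}"
}
```

On the test machine, `list_net_devs 0000:0a:00.0` would print `cvl_0_0`, matching the log.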
00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:42.031 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:42.031 22:47:44 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:42.031 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:42.032 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:42.032 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:42.032 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:42.032 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:42.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:42.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:24:42.032 00:24:42.032 --- 10.0.0.2 ping statistics --- 00:24:42.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.032 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:24:42.032 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:42.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:42.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:24:42.032 00:24:42.032 --- 10.0.0.1 ping statistics --- 00:24:42.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.032 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:24:42.032 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:42.032 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:24:42.032 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:42.032 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:42.032 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:42.032 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:42.032 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:42.032 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:42.032 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:42.032 22:47:44 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:42.032 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:42.032 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:42.032 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:42.032 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=282966 00:24:42.032 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:42.032 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 282966 00:24:42.032 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 282966 ']' 00:24:42.032 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.032 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:42.032 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.032 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:42.032 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:42.032 [2024-10-11 22:47:44.985844] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:24:42.032 [2024-10-11 22:47:44.985951] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.032 [2024-10-11 22:47:45.052006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.032 [2024-10-11 22:47:45.099853] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.032 [2024-10-11 22:47:45.099909] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.032 [2024-10-11 22:47:45.099931] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.032 [2024-10-11 22:47:45.099943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.032 [2024-10-11 22:47:45.099953] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
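Editor's note, not part of the log: `waitforlisten 282966` above blocks until the freshly started `nvmf_tgt` is reachable on `/var/tmp/spdk.sock`. A simplified sketch of that retry loop, under the stated assumption that it only polls for the socket path to appear (the real helper also verifies the RPC server answers, which this sketch omits):

```shell
#!/usr/bin/env bash
# Poll for a path to appear, up to $2 attempts ~0.1s apart.
# Simplification: checks existence only (-e), not RPC readiness.
wait_for_sock() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -e "$sock" ] && return 0
        sleep 0.1
    done
    return 1
}
```

The same pattern is reused below for `waitforlisten 283004 /var/tmp/bdevperf.sock`.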
00:24:42.032 [2024-10-11 22:47:45.100633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.032 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:42.032 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:42.032 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:42.032 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:42.032 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:42.032 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.032 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:42.032 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:42.032 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:42.032 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.lvf 00:24:42.032 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:42.032 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.lvf 00:24:42.032 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.lvf 00:24:42.032 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.lvf 00:24:42.032 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:42.599 [2024-10-11 22:47:45.560774] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:42.599 [2024-10-11 22:47:45.576745] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:42.599 [2024-10-11 22:47:45.576968] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:42.599 malloc0 00:24:42.599 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:42.599 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=283004 00:24:42.599 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:42.599 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 283004 /var/tmp/bdevperf.sock 00:24:42.599 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 283004 ']' 00:24:42.599 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:42.599 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:42.599 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:42.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:42.599 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:42.599 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:42.599 [2024-10-11 22:47:45.711276] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:24:42.599 [2024-10-11 22:47:45.711369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid283004 ] 00:24:42.599 [2024-10-11 22:47:45.775422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.599 [2024-10-11 22:47:45.824086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:42.856 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:42.857 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:42.857 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.lvf 00:24:43.114 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:43.372 [2024-10-11 22:47:46.448371] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:43.372 TLSTESTn1 00:24:43.372 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:43.631 Running I/O for 10 seconds... 
00:24:45.503 3219.00 IOPS, 12.57 MiB/s [2024-10-11T20:47:49.707Z] 3347.50 IOPS, 13.08 MiB/s [2024-10-11T20:47:51.083Z] 3310.00 IOPS, 12.93 MiB/s [2024-10-11T20:47:52.019Z] 3326.75 IOPS, 13.00 MiB/s [2024-10-11T20:47:52.951Z] 3342.80 IOPS, 13.06 MiB/s [2024-10-11T20:47:53.886Z] 3349.67 IOPS, 13.08 MiB/s [2024-10-11T20:47:54.820Z] 3336.57 IOPS, 13.03 MiB/s [2024-10-11T20:47:55.755Z] 3336.25 IOPS, 13.03 MiB/s [2024-10-11T20:47:56.706Z] 3298.33 IOPS, 12.88 MiB/s [2024-10-11T20:47:56.964Z] 3300.80 IOPS, 12.89 MiB/s 00:24:53.696 Latency(us) 00:24:53.696 [2024-10-11T20:47:56.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.697 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:53.697 Verification LBA range: start 0x0 length 0x2000 00:24:53.697 TLSTESTn1 : 10.03 3302.23 12.90 0.00 0.00 38676.39 10291.58 49321.91 00:24:53.697 [2024-10-11T20:47:56.965Z] =================================================================================================================== 00:24:53.697 [2024-10-11T20:47:56.965Z] Total : 3302.23 12.90 0.00 0.00 38676.39 10291.58 49321.91 00:24:53.697 { 00:24:53.697 "results": [ 00:24:53.697 { 00:24:53.697 "job": "TLSTESTn1", 00:24:53.697 "core_mask": "0x4", 00:24:53.697 "workload": "verify", 00:24:53.697 "status": "finished", 00:24:53.697 "verify_range": { 00:24:53.697 "start": 0, 00:24:53.697 "length": 8192 00:24:53.697 }, 00:24:53.697 "queue_depth": 128, 00:24:53.697 "io_size": 4096, 00:24:53.697 "runtime": 10.033838, 00:24:53.697 "iops": 3302.2259279051545, 00:24:53.697 "mibps": 12.89932003087951, 00:24:53.697 "io_failed": 0, 00:24:53.697 "io_timeout": 0, 00:24:53.697 "avg_latency_us": 38676.39179538082, 00:24:53.697 "min_latency_us": 10291.579259259259, 00:24:53.697 "max_latency_us": 49321.90814814815 00:24:53.697 } 00:24:53.697 ], 00:24:53.697 "core_count": 1 00:24:53.697 } 00:24:53.697 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:53.697 
22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:53.697 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:24:53.697 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:24:53.697 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:24:53.697 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:53.697 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:53.697 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:53.697 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:53.697 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:53.697 nvmf_trace.0 00:24:53.697 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:24:53.697 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 283004 00:24:53.697 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 283004 ']' 00:24:53.697 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 283004 00:24:53.697 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:24:53.697 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:53.697 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 283004 00:24:53.697 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:53.697 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:53.697 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 283004' 00:24:53.697 killing process with pid 283004 00:24:53.697 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 283004 00:24:53.697 Received shutdown signal, test time was about 10.000000 seconds 00:24:53.697 00:24:53.697 Latency(us) 00:24:53.697 [2024-10-11T20:47:56.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.697 [2024-10-11T20:47:56.965Z] =================================================================================================================== 00:24:53.697 [2024-10-11T20:47:56.965Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:53.697 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 283004 00:24:53.955 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:53.955 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:53.955 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:53.955 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:53.955 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:53.955 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:53.955 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:53.955 rmmod nvme_tcp 00:24:53.955 rmmod nvme_fabrics 00:24:53.955 rmmod nvme_keyring 00:24:53.955 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:53.955 22:47:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:53.955 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:53.955 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 282966 ']' 00:24:53.955 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 282966 00:24:53.955 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 282966 ']' 00:24:53.955 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 282966 00:24:53.956 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:24:53.956 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:53.956 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 282966 00:24:53.956 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:53.956 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:53.956 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 282966' 00:24:53.956 killing process with pid 282966 00:24:53.956 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 282966 00:24:53.956 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 282966 00:24:54.216 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:54.216 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:54.216 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:54.216 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 
00:24:54.216 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:24:54.216 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:54.216 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:24:54.216 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:54.216 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:54.216 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.216 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:54.216 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.lvf 00:24:56.755 00:24:56.755 real 0m17.153s 00:24:56.755 user 0m22.985s 00:24:56.755 sys 0m5.234s 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:56.755 ************************************ 00:24:56.755 END TEST nvmf_fips 00:24:56.755 ************************************ 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:56.755 ************************************ 00:24:56.755 START TEST nvmf_control_msg_list 00:24:56.755 ************************************ 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:56.755 * Looking for test storage... 00:24:56.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:56.755 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:56.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.756 --rc genhtml_branch_coverage=1 00:24:56.756 --rc genhtml_function_coverage=1 00:24:56.756 --rc genhtml_legend=1 00:24:56.756 --rc geninfo_all_blocks=1 00:24:56.756 --rc geninfo_unexecuted_blocks=1 00:24:56.756 00:24:56.756 ' 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:56.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.756 --rc genhtml_branch_coverage=1 00:24:56.756 --rc genhtml_function_coverage=1 00:24:56.756 --rc genhtml_legend=1 00:24:56.756 --rc geninfo_all_blocks=1 00:24:56.756 --rc geninfo_unexecuted_blocks=1 00:24:56.756 00:24:56.756 ' 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:56.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.756 --rc genhtml_branch_coverage=1 00:24:56.756 --rc genhtml_function_coverage=1 00:24:56.756 --rc genhtml_legend=1 00:24:56.756 --rc geninfo_all_blocks=1 00:24:56.756 --rc geninfo_unexecuted_blocks=1 00:24:56.756 00:24:56.756 ' 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:56.756 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.756 --rc genhtml_branch_coverage=1 00:24:56.756 --rc genhtml_function_coverage=1 00:24:56.756 --rc genhtml_legend=1 00:24:56.756 --rc geninfo_all_blocks=1 00:24:56.756 --rc geninfo_unexecuted_blocks=1 00:24:56.756 00:24:56.756 ' 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:56.756 22:47:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.756 22:47:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:56.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:56.756 22:47:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:56.756 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:58.661 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:58.662 22:48:01 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:58.662 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:58.662 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:58.662 22:48:01 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:58.662 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:58.662 22:48:01 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:58.662 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:58.662 22:48:01 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:58.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:24:58.662 00:24:58.662 --- 10.0.0.2 ping statistics --- 00:24:58.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.662 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:58.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:58.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:24:58.662 00:24:58.662 --- 10.0.0.1 ping statistics --- 00:24:58.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.662 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:24:58.662 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:58.663 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.663 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:58.663 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:58.663 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:24:58.663 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:58.663 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:58.663 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:58.663 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:58.663 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:58.663 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:58.663 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=286374 00:24:58.663 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:58.663 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 286374 00:24:58.663 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 286374 ']' 00:24:58.663 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.663 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:58.663 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:58.663 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:58.663 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:58.663 [2024-10-11 22:48:01.826181] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:24:58.663 [2024-10-11 22:48:01.826264] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.663 [2024-10-11 22:48:01.895143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.921 [2024-10-11 22:48:01.941737] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.922 [2024-10-11 22:48:01.941784] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.922 [2024-10-11 22:48:01.941798] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.922 [2024-10-11 22:48:01.941809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:58.922 [2024-10-11 22:48:01.941819] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:58.922 [2024-10-11 22:48:01.942425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:58.922 [2024-10-11 22:48:02.096414] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:58.922 Malloc0 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:58.922 [2024-10-11 22:48:02.136861] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=286513 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=286514 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=286515 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 286513 00:24:58.922 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:59.181 [2024-10-11 22:48:02.195447] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:24:59.181 [2024-10-11 22:48:02.205517] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:59.181 [2024-10-11 22:48:02.205765] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:00.115 Initializing NVMe Controllers 00:25:00.115 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:00.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:00.115 Initialization complete. Launching workers. 00:25:00.115 ======================================================== 00:25:00.115 Latency(us) 00:25:00.115 Device Information : IOPS MiB/s Average min max 00:25:00.115 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3876.00 15.14 257.59 156.66 600.76 00:25:00.115 ======================================================== 00:25:00.115 Total : 3876.00 15.14 257.59 156.66 600.76 00:25:00.115 00:25:00.115 Initializing NVMe Controllers 00:25:00.115 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:00.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:00.115 Initialization complete. Launching workers. 
00:25:00.115 ======================================================== 00:25:00.115 Latency(us) 00:25:00.115 Device Information : IOPS MiB/s Average min max 00:25:00.115 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40892.31 40727.28 40966.60 00:25:00.115 ======================================================== 00:25:00.115 Total : 25.00 0.10 40892.31 40727.28 40966.60 00:25:00.115 00:25:00.115 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 286514 00:25:00.373 Initializing NVMe Controllers 00:25:00.373 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:00.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:00.373 Initialization complete. Launching workers. 00:25:00.373 ======================================================== 00:25:00.373 Latency(us) 00:25:00.373 Device Information : IOPS MiB/s Average min max 00:25:00.373 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3840.00 15.00 259.92 198.42 494.74 00:25:00.373 ======================================================== 00:25:00.373 Total : 3840.00 15.00 259.92 198.42 494.74 00:25:00.373 00:25:00.373 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 286515 00:25:00.373 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:00.373 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:00.373 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:00.373 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:00.373 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:00.373 22:48:03 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:00.373 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:00.373 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:00.373 rmmod nvme_tcp 00:25:00.373 rmmod nvme_fabrics 00:25:00.373 rmmod nvme_keyring 00:25:00.373 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:00.373 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:00.373 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:00.373 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # '[' -n 286374 ']' 00:25:00.373 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 286374 00:25:00.373 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 286374 ']' 00:25:00.373 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 286374 00:25:00.373 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:25:00.373 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:00.373 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 286374 00:25:00.373 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:00.373 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:00.373 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 286374' 00:25:00.373 killing process with pid 286374 00:25:00.374 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 286374 00:25:00.374 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 286374 00:25:00.634 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:00.634 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:00.634 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:00.634 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:00.634 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:25:00.634 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:00.634 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:25:00.634 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:00.634 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:00.634 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.634 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:00.634 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.543 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:02.543 00:25:02.543 real 0m6.285s 00:25:02.543 user 0m5.520s 00:25:02.543 sys 
0m2.637s 00:25:02.543 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:02.543 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:02.543 ************************************ 00:25:02.543 END TEST nvmf_control_msg_list 00:25:02.543 ************************************ 00:25:02.543 22:48:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:02.543 22:48:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:02.543 22:48:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:02.543 22:48:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:02.543 ************************************ 00:25:02.543 START TEST nvmf_wait_for_buf 00:25:02.543 ************************************ 00:25:02.543 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:02.802 * Looking for test storage... 
00:25:02.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:02.802 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # 
export 'LCOV_OPTS= 00:25:02.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.803 --rc genhtml_branch_coverage=1 00:25:02.803 --rc genhtml_function_coverage=1 00:25:02.803 --rc genhtml_legend=1 00:25:02.803 --rc geninfo_all_blocks=1 00:25:02.803 --rc geninfo_unexecuted_blocks=1 00:25:02.803 00:25:02.803 ' 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:02.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.803 --rc genhtml_branch_coverage=1 00:25:02.803 --rc genhtml_function_coverage=1 00:25:02.803 --rc genhtml_legend=1 00:25:02.803 --rc geninfo_all_blocks=1 00:25:02.803 --rc geninfo_unexecuted_blocks=1 00:25:02.803 00:25:02.803 ' 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:02.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.803 --rc genhtml_branch_coverage=1 00:25:02.803 --rc genhtml_function_coverage=1 00:25:02.803 --rc genhtml_legend=1 00:25:02.803 --rc geninfo_all_blocks=1 00:25:02.803 --rc geninfo_unexecuted_blocks=1 00:25:02.803 00:25:02.803 ' 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:02.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.803 --rc genhtml_branch_coverage=1 00:25:02.803 --rc genhtml_function_coverage=1 00:25:02.803 --rc genhtml_legend=1 00:25:02.803 --rc geninfo_all_blocks=1 00:25:02.803 --rc geninfo_unexecuted_blocks=1 00:25:02.803 00:25:02.803 ' 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:02.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:02.803 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:05.345 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.345 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:05.345 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:05.345 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:05.345 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:05.345 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:05.345 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:05.345 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:05.345 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:05.345 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:05.345 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:05.345 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:05.345 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:05.345 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:05.345 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:05.345 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.345 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.345 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.345 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.345 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.345 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.345 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.345 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:05.345 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.345 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.345 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.345 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.345 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:05.346 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:05.346 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:05.346 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:05.346 22:48:08 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:05.346 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:05.346 22:48:08 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:05.346 22:48:08 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:05.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:25:05.346 00:25:05.346 --- 10.0.0.2 ping statistics --- 00:25:05.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.346 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:05.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:05.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:25:05.346 00:25:05.346 --- 10.0.0.1 ping statistics --- 00:25:05.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.346 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=288958 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 288958 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 288958 ']' 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:05.346 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.347 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:05.347 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:05.347 [2024-10-11 22:48:08.284764] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:25:05.347 [2024-10-11 22:48:08.284860] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.347 [2024-10-11 22:48:08.350795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.347 [2024-10-11 22:48:08.396810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:05.347 [2024-10-11 22:48:08.396861] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:05.347 [2024-10-11 22:48:08.396875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:05.347 [2024-10-11 22:48:08.396886] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:05.347 [2024-10-11 22:48:08.396896] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:05.347 [2024-10-11 22:48:08.397468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.347 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:05.347 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:25:05.347 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:05.347 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:05.347 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:05.347 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:05.347 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:05.347 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:05.347 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:05.347 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.347 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:05.347 
22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.347 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:05.347 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.347 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:05.347 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.347 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:05.347 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.347 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:05.606 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.606 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:05.606 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.606 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:05.606 Malloc0 00:25:05.606 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.606 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:05.606 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.606 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:25:05.606 [2024-10-11 22:48:08.642218] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:05.606 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.606 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:05.606 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.606 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:05.606 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.606 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:05.606 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.606 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:05.606 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.606 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:05.606 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.606 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:05.606 [2024-10-11 22:48:08.666357] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:05.606 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:25:05.606 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:05.606 [2024-10-11 22:48:08.734693] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:06.981 Initializing NVMe Controllers 00:25:06.981 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:06.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:06.981 Initialization complete. Launching workers. 00:25:06.981 ======================================================== 00:25:06.981 Latency(us) 00:25:06.981 Device Information : IOPS MiB/s Average min max 00:25:06.981 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 93.67 11.71 44231.14 7985.43 150563.34 00:25:06.981 ======================================================== 00:25:06.981 Total : 93.67 11.71 44231.14 7985.43 150563.34 00:25:06.981 00:25:07.240 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:07.240 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:07.240 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.240 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:07.241 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.241 22:48:10 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1478 00:25:07.241 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1478 -eq 0 ]] 00:25:07.241 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:07.241 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:07.241 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:07.241 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:07.241 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:07.241 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:07.241 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:07.241 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:07.241 rmmod nvme_tcp 00:25:07.241 rmmod nvme_fabrics 00:25:07.241 rmmod nvme_keyring 00:25:07.241 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:07.241 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:07.241 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:07.241 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 288958 ']' 00:25:07.241 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 288958 00:25:07.241 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 288958 ']' 00:25:07.241 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 288958 
00:25:07.241 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:25:07.241 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:07.241 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 288958 00:25:07.241 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:07.241 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:07.241 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 288958' 00:25:07.241 killing process with pid 288958 00:25:07.241 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 288958 00:25:07.241 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 288958 00:25:07.500 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:07.500 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:07.500 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:07.500 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:07.500 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:25:07.500 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:07.500 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:25:07.500 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:07.500 22:48:10 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:07.500 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.500 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:07.500 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.408 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:09.408 00:25:09.408 real 0m6.847s 00:25:09.408 user 0m3.266s 00:25:09.408 sys 0m2.055s 00:25:09.408 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:09.408 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:09.408 ************************************ 00:25:09.408 END TEST nvmf_wait_for_buf 00:25:09.408 ************************************ 00:25:09.667 22:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:25:09.667 22:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:09.667 22:48:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:09.667 22:48:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:09.667 22:48:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:09.667 ************************************ 00:25:09.667 START TEST nvmf_fuzz 00:25:09.667 ************************************ 00:25:09.667 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh 
--transport=tcp 00:25:09.667 * Looking for test storage... 00:25:09.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:09.667 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:09.667 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:25:09.667 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:09.667 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:09.667 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:09.667 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:09.667 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:09.667 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:25:09.667 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:25:09.667 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:25:09.667 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:25:09.667 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:25:09.667 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:25:09.667 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:25:09.667 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:09.667 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:25:09.667 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:25:09.667 22:48:12 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:09.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.668 --rc genhtml_branch_coverage=1 00:25:09.668 --rc genhtml_function_coverage=1 
00:25:09.668 --rc genhtml_legend=1 00:25:09.668 --rc geninfo_all_blocks=1 00:25:09.668 --rc geninfo_unexecuted_blocks=1 00:25:09.668 00:25:09.668 ' 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:09.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.668 --rc genhtml_branch_coverage=1 00:25:09.668 --rc genhtml_function_coverage=1 00:25:09.668 --rc genhtml_legend=1 00:25:09.668 --rc geninfo_all_blocks=1 00:25:09.668 --rc geninfo_unexecuted_blocks=1 00:25:09.668 00:25:09.668 ' 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:09.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.668 --rc genhtml_branch_coverage=1 00:25:09.668 --rc genhtml_function_coverage=1 00:25:09.668 --rc genhtml_legend=1 00:25:09.668 --rc geninfo_all_blocks=1 00:25:09.668 --rc geninfo_unexecuted_blocks=1 00:25:09.668 00:25:09.668 ' 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:09.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.668 --rc genhtml_branch_coverage=1 00:25:09.668 --rc genhtml_function_coverage=1 00:25:09.668 --rc genhtml_legend=1 00:25:09.668 --rc geninfo_all_blocks=1 00:25:09.668 --rc geninfo_unexecuted_blocks=1 00:25:09.668 00:25:09.668 ' 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:09.668 
22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:09.668 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:25:09.668 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:12.204 22:48:14 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:25:12.204 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:12.204 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:12.204 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:12.204 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # is_hw=yes 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:12.204 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:12.205 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:12.205 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:12.205 22:48:14 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:12.205 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:12.205 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:12.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:12.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:25:12.205 00:25:12.205 --- 10.0.0.2 ping statistics --- 00:25:12.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.205 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:12.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:12.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:25:12.205 00:25:12.205 --- 10.0.0.1 ping statistics --- 00:25:12.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.205 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # return 0 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=291316 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 291316 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' 
-z 291316 ']' 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:12.205 Malloc0 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:12.463 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.463 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:12.463 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:44.527 Fuzzing completed. 
Shutting down the fuzz application 00:25:44.527 00:25:44.527 Dumping successful admin opcodes: 00:25:44.527 8, 9, 10, 24, 00:25:44.527 Dumping successful io opcodes: 00:25:44.527 0, 9, 00:25:44.527 NS: 0x2000008eff00 I/O qp, Total commands completed: 503186, total successful commands: 2900, random_seed: 3273487040 00:25:44.527 NS: 0x2000008eff00 admin qp, Total commands completed: 60336, total successful commands: 478, random_seed: 3196255680 00:25:44.527 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:44.527 Fuzzing completed. Shutting down the fuzz application 00:25:44.527 00:25:44.527 Dumping successful admin opcodes: 00:25:44.527 24, 00:25:44.527 Dumping successful io opcodes: 00:25:44.527 00:25:44.527 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 37625033 00:25:44.527 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 37737488 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:44.527 22:48:47 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:44.527 rmmod nvme_tcp 00:25:44.527 rmmod nvme_fabrics 00:25:44.527 rmmod nvme_keyring 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@515 -- # '[' -n 291316 ']' 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # killprocess 291316 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 291316 ']' 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 291316 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 291316 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo 
']' 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 291316' 00:25:44.527 killing process with pid 291316 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 291316 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 291316 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # iptables-save 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # iptables-restore 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:44.527 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.435 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:46.436 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:46.436 00:25:46.436 real 0m36.861s 00:25:46.436 user 0m50.904s 00:25:46.436 sys 0m14.695s 00:25:46.436 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:46.436 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:46.436 ************************************ 00:25:46.436 END TEST nvmf_fuzz 00:25:46.436 ************************************ 00:25:46.436 22:48:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:46.436 22:48:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:46.436 22:48:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:46.436 22:48:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:46.436 ************************************ 00:25:46.436 START TEST nvmf_multiconnection 00:25:46.436 ************************************ 00:25:46.436 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:46.436 * Looking for test storage... 
00:25:46.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:46.436 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:46.436 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lcov --version 00:25:46.436 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:46.695 22:48:49 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:46.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.695 --rc genhtml_branch_coverage=1 00:25:46.695 --rc genhtml_function_coverage=1 00:25:46.695 --rc genhtml_legend=1 00:25:46.695 --rc geninfo_all_blocks=1 00:25:46.695 --rc geninfo_unexecuted_blocks=1 00:25:46.695 00:25:46.695 ' 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:46.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.695 --rc genhtml_branch_coverage=1 00:25:46.695 --rc genhtml_function_coverage=1 00:25:46.695 --rc genhtml_legend=1 00:25:46.695 --rc geninfo_all_blocks=1 00:25:46.695 --rc geninfo_unexecuted_blocks=1 00:25:46.695 00:25:46.695 ' 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:46.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.695 --rc genhtml_branch_coverage=1 00:25:46.695 --rc genhtml_function_coverage=1 00:25:46.695 --rc genhtml_legend=1 00:25:46.695 --rc geninfo_all_blocks=1 00:25:46.695 --rc geninfo_unexecuted_blocks=1 00:25:46.695 00:25:46.695 ' 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:46.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.695 --rc genhtml_branch_coverage=1 00:25:46.695 --rc genhtml_function_coverage=1 00:25:46.695 --rc genhtml_legend=1 00:25:46.695 --rc geninfo_all_blocks=1 00:25:46.695 --rc geninfo_unexecuted_blocks=1 00:25:46.695 00:25:46.695 ' 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@7 -- # uname -s 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:46.695 22:48:49 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:46.695 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.696 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:46.696 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:46.696 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:46.696 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:46.696 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:46.696 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:46.696 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:46.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:46.696 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:46.696 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:46.696 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:46.696 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:25:46.696 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:46.696 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:46.696 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:46.696 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:46.696 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:46.696 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:46.696 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:46.696 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:46.696 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.696 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:46.696 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.696 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:46.696 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:46.696 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:46.696 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:48.600 22:48:51 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:48.600 22:48:51 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:48.600 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:48.600 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:48.600 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ up == 
up ]] 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:48.600 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # is_hw=yes 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:48.600 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:48.859 22:48:51 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:48.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:48.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:25:48.859 00:25:48.859 --- 10.0.0.2 ping statistics --- 00:25:48.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.859 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:48.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:48.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:25:48.859 00:25:48.859 --- 10.0.0.1 ping statistics --- 00:25:48.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.859 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # return 0 00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
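For reference, the namespace plumbing the trace above performs (nvmf/common.sh, nvmf_tcp_init) boils down to the sequence below. This is a dry-run sketch: `run()` only echoes each command, since the real calls need root and the cvl_0_0/cvl_0_1 e810 interfaces from this test rig; the interface names, namespace name, and 10.0.0.0/24 addresses are copied from the trace.

```shell
#!/usr/bin/env bash
# Dry-run sketch of nvmf_tcp_init as traced in the log. run() prints instead of
# executing, because the real commands require root and the e810 NICs.
run() { echo "+ $*"; }

NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NVMF_TARGET_NAMESPACE"
run ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"      # target NIC moves into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator IP stays in the root namespace
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
run ping -c 1 10.0.0.2                                      # initiator -> target reachability
run ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1
```

Splitting target and initiator across a network namespace is what lets one host exercise a real TCP path between the two ends, which the pair of pings above verifies before the target app starts.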
00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # nvmfpid=296922 00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # waitforlisten 296922 00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 296922 ']' 00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:48.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:48.859 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.859 [2024-10-11 22:48:52.045072] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:25:48.859 [2024-10-11 22:48:52.045183] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:48.859 [2024-10-11 22:48:52.109834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:49.118 [2024-10-11 22:48:52.157403] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.118 [2024-10-11 22:48:52.157457] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.118 [2024-10-11 22:48:52.157480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:49.118 [2024-10-11 22:48:52.157490] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:49.118 [2024-10-11 22:48:52.157499] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
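The target launch recorded here (nvmfappstart) runs nvmf_tgt inside the target's network namespace, so its NVMe/TCP listener binds on cvl_0_0 there. A dry-run sketch, with `run()` echoing instead of executing and the long Jenkins binary path shortened to `nvmf_tgt`:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tgt launch from the trace. run() prints only;
# the real binary lives under the Jenkins workspace path shown in the log.
run() { echo "+ $*"; }

NVMF_TARGET_NS_CMD="ip netns exec cvl_0_0_ns_spdk"
run $NVMF_TARGET_NS_CMD nvmf_tgt -i 0 -e 0xFFFF -m 0xF
# -i 0       shared-memory instance id (matches 'spdk_trace -s nvmf -i 0' above)
# -e 0xFFFF  tracepoint group mask, per the app_setup_trace NOTICE lines
# -m 0xF     core mask: four reactor cores
```

The test then waits for the app to listen on /var/tmp/spdk.sock before issuing any RPCs (waitforlisten).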
00:25:49.118 [2024-10-11 22:48:52.159129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.118 [2024-10-11 22:48:52.159190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:49.118 [2024-10-11 22:48:52.159259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:49.118 [2024-10-11 22:48:52.159261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.118 [2024-10-11 22:48:52.300380] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:49.118 22:48:52 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.118 Malloc1 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.118 [2024-10-11 22:48:52.374464] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.118 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.378 Malloc2 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.378 Malloc3 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.378 Malloc4 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.378 
22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.378 Malloc5 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.378 22:48:52 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.378 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.379 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.379 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:49.379 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:49.379 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.379 Malloc6 00:25:49.379 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.379 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:49.379 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.379 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.379 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.379 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:49.379 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.379 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.379 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.379 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:49.379 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.379 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.379 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.379 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.379 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:49.379 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.379 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.638 Malloc7 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.638 22:48:52 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.638 Malloc8 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.638 22:48:52 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.638 Malloc9 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:49.638 Malloc10
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:49.638 Malloc11
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:49.638 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
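[Editor's note] The xtrace records above come from the per-subsystem setup loop in target/multiconnection.sh. A minimal sketch of that loop, using the RPC names that appear verbatim in the log (bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener); the `RPC` indirection is an assumption added here so the sketch only prints the command lines unless pointed at a real scripts/rpc.py:

```shell
#!/usr/bin/env bash
# Sketch of the multiconnection setup loop seen in the log. For each of the 11
# subsystems it creates a 64 MiB malloc bdev with 512-byte blocks, an NVMe-oF
# subsystem with serial SPDK$i, attaches the bdev as a namespace, and adds a
# TCP listener on 10.0.0.2:4420.
RPC=${RPC:-"echo rpc.py"}   # hypothetical stub; set RPC=scripts/rpc.py to apply for real
NVMF_SUBSYS=${NVMF_SUBSYS:-11}

nvmf_setup() {
  local i
  for i in $(seq 1 "$NVMF_SUBSYS"); do
    $RPC bdev_malloc_create 64 512 -b "Malloc$i"
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done
}

nvmf_setup
```

With the default stub the loop emits four command lines per subsystem (44 in total), matching the rpc_cmd sequence the xtrace shows for cnode9 through cnode11.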
00:25:50.571 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:50.571 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:50.571 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:50.571 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:50.571 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:52.469 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:52.469 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:52.469 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:25:52.469 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:52.469 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:52.469 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:52.469 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.469 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:53.034 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:53.034 22:48:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:53.034 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:53.034 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:53.034 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:54.932 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:54.932 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:54.932 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:25:55.189 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:55.189 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:55.189 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:55.189 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:55.189 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:55.754 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:55.754 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:55.754 22:48:58 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:55.754 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:55.754 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:58.281 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:58.281 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:58.281 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:25:58.281 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:58.281 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:58.281 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:58.281 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.281 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:58.539 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:58.539 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:58.539 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:58.539 
22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:58.539 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:00.437 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:00.437 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:00.438 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:26:00.438 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:00.438 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:00.438 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:00.438 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:00.438 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:01.387 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:01.387 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:01.387 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:01.387 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:01.387 22:49:04 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:03.291 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:03.292 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:03.292 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:26:03.292 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:03.292 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:03.292 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:03.292 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.292 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:04.226 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:04.226 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:04.226 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:04.226 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:04.226 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:06.126 22:49:09 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:06.126 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:06.126 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:26:06.126 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:06.126 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:06.126 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:06.126 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.126 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:06.692 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:06.692 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:06.692 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:06.692 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:06.692 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:09.221 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:09.221 22:49:11 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:09.221 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:26:09.221 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:09.221 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:09.221 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:09.221 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:09.221 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:09.479 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:09.479 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:09.479 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:09.479 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:09.479 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:12.009 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:12.009 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:12.009 22:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:26:12.009 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:12.009 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:12.009 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:12.009 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:12.009 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:12.575 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:12.575 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:12.575 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:12.575 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:12.575 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:14.481 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:14.482 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:14.482 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:26:14.482 22:49:17 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:14.482 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:14.482 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:14.482 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:14.482 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:15.414 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:15.414 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:15.414 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:15.414 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:15.414 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:17.314 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:17.314 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:17.314 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:26:17.314 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:17.314 22:49:20 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:17.314 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:17.314 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.314 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:18.248 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:18.248 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:18.248 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:18.248 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:18.248 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:20.147 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:20.147 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:20.147 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:26:20.147 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:20.147 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:20.147 
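[Editor's note] Each `nvme connect` above is paired with a `waitforserial SPDKn` call that polls `lsblk -l -o NAME,SERIAL` up to 16 times, two seconds apart, until a block device with the expected serial appears. A rough re-sketch of that helper, reconstructed from the xtrace lines (the real implementation lives in common/autotest_common.sh; the `WFS_LSBLK`/`WFS_DELAY` override variables are hypothetical, added so the sketch can be exercised without real NVMe devices):

```shell
#!/usr/bin/env bash
# Poll for a block device whose SERIAL column matches $1, as the log's
# waitforserial does: retry up to 16 times with a delay between attempts.
waitforserial() {
  local serial=$1 expected=${2:-1}
  local i=0 found
  # WFS_LSBLK is a hypothetical test hook; the real helper runs lsblk directly.
  local lister=${WFS_LSBLK:-"lsblk -l -o NAME,SERIAL"}
  while (( i++ <= 15 )); do
    # grep -c prints 0 on no match but exits non-zero; '|| true' keeps set -e happy
    found=$($lister 2>/dev/null | grep -c -- "$serial" || true)
    if (( found == expected )); then
      return 0
    fi
    sleep "${WFS_DELAY:-2}"
  done
  return 1
}
```

The `(( i++ <= 15 ))` bound and the two-second sleep mirror the `@1206`/`@1205` xtrace lines; on success the helper returns 0 as soon as the device count matches.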
22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:26:20.147 22:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10
00:26:20.147 [global]
00:26:20.147 thread=1
00:26:20.147 invalidate=1
00:26:20.147 rw=read
00:26:20.147 time_based=1
00:26:20.147 runtime=10
00:26:20.147 ioengine=libaio
00:26:20.147 direct=1
00:26:20.147 bs=262144
00:26:20.147 iodepth=64
00:26:20.147 norandommap=1
00:26:20.147 numjobs=1
00:26:20.147 
00:26:20.147 [job0]
00:26:20.147 filename=/dev/nvme0n1
00:26:20.147 [job1]
00:26:20.147 filename=/dev/nvme10n1
00:26:20.147 [job2]
00:26:20.147 filename=/dev/nvme1n1
00:26:20.147 [job3]
00:26:20.147 filename=/dev/nvme2n1
00:26:20.147 [job4]
00:26:20.147 filename=/dev/nvme3n1
00:26:20.147 [job5]
00:26:20.147 filename=/dev/nvme4n1
00:26:20.147 [job6]
00:26:20.147 filename=/dev/nvme5n1
00:26:20.147 [job7]
00:26:20.147 filename=/dev/nvme6n1
00:26:20.147 [job8]
00:26:20.147 filename=/dev/nvme7n1
00:26:20.147 [job9]
00:26:20.147 filename=/dev/nvme8n1
00:26:20.147 [job10]
00:26:20.147 filename=/dev/nvme9n1
00:26:20.147 Could not set queue depth (nvme0n1)
00:26:20.147 Could not set queue depth (nvme10n1)
00:26:20.147 Could not set queue depth (nvme1n1)
00:26:20.147 Could not set queue depth (nvme2n1)
00:26:20.147 Could not set queue depth (nvme3n1)
00:26:20.147 Could not set queue depth (nvme4n1)
00:26:20.147 Could not set queue depth (nvme5n1)
00:26:20.147 Could not set queue depth (nvme6n1)
00:26:20.147 Could not set queue depth (nvme7n1)
00:26:20.147 Could not set queue depth (nvme8n1)
00:26:20.147 Could not set queue depth (nvme9n1)
00:26:20.405 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:20.405 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB,
ioengine=libaio, iodepth=64 00:26:20.405 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.405 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.405 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.405 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.405 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.405 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.405 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.405 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.405 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.405 fio-3.35 00:26:20.405 Starting 11 threads 00:26:32.606 00:26:32.606 job0: (groupid=0, jobs=1): err= 0: pid=301158: Fri Oct 11 22:49:34 2024 00:26:32.606 read: IOPS=79, BW=20.0MiB/s (20.9MB/s)(205MiB/10282msec) 00:26:32.606 slat (usec): min=10, max=898386, avg=10620.24, stdev=61347.65 00:26:32.606 clat (msec): min=21, max=2147, avg=790.00, stdev=510.16 00:26:32.606 lat (msec): min=21, max=2241, avg=800.62, stdev=519.09 00:26:32.606 clat percentiles (msec): 00:26:32.606 | 1.00th=[ 29], 5.00th=[ 90], 10.00th=[ 167], 20.00th=[ 236], 00:26:32.606 | 30.00th=[ 313], 40.00th=[ 592], 50.00th=[ 818], 60.00th=[ 927], 00:26:32.606 | 70.00th=[ 1053], 80.00th=[ 1284], 90.00th=[ 1552], 95.00th=[ 1620], 00:26:32.606 | 99.00th=[ 1854], 99.50th=[ 1854], 99.90th=[ 2140], 99.95th=[ 2140], 00:26:32.606 | 99.99th=[ 2140] 00:26:32.606 bw ( KiB/s): min= 5632, 
max=58880, per=4.23%, avg=20426.11, stdev=14470.37, samples=19 00:26:32.606 iops : min= 22, max= 230, avg=79.79, stdev=56.52, samples=19 00:26:32.606 lat (msec) : 50=1.83%, 100=4.26%, 250=15.83%, 500=9.87%, 750=13.28% 00:26:32.606 lat (msec) : 1000=22.17%, 2000=32.52%, >=2000=0.24% 00:26:32.606 cpu : usr=0.04%, sys=0.36%, ctx=144, majf=0, minf=4097 00:26:32.606 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.3% 00:26:32.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.606 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.606 issued rwts: total=821,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.606 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.606 job1: (groupid=0, jobs=1): err= 0: pid=301159: Fri Oct 11 22:49:34 2024 00:26:32.606 read: IOPS=185, BW=46.3MiB/s (48.5MB/s)(473MiB/10225msec) 00:26:32.606 slat (usec): min=12, max=323223, avg=5296.27, stdev=20451.19 00:26:32.606 clat (msec): min=32, max=1257, avg=340.07, stdev=270.64 00:26:32.606 lat (msec): min=32, max=1257, avg=345.36, stdev=274.71 00:26:32.606 clat percentiles (msec): 00:26:32.606 | 1.00th=[ 36], 5.00th=[ 82], 10.00th=[ 99], 20.00th=[ 115], 00:26:32.606 | 30.00th=[ 131], 40.00th=[ 150], 50.00th=[ 199], 60.00th=[ 317], 00:26:32.606 | 70.00th=[ 523], 80.00th=[ 600], 90.00th=[ 701], 95.00th=[ 835], 00:26:32.606 | 99.00th=[ 1167], 99.50th=[ 1183], 99.90th=[ 1250], 99.95th=[ 1250], 00:26:32.606 | 99.99th=[ 1250] 00:26:32.606 bw ( KiB/s): min= 7168, max=145920, per=9.69%, avg=46822.40, stdev=41187.23, samples=20 00:26:32.606 iops : min= 28, max= 570, avg=182.90, stdev=160.89, samples=20 00:26:32.606 lat (msec) : 50=1.90%, 100=9.61%, 250=42.68%, 500=13.84%, 750=24.04% 00:26:32.606 lat (msec) : 1000=5.86%, 2000=2.06% 00:26:32.606 cpu : usr=0.17%, sys=0.65%, ctx=260, majf=0, minf=4097 00:26:32.606 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:26:32.606 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.606 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.606 issued rwts: total=1893,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.606 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.606 job2: (groupid=0, jobs=1): err= 0: pid=301160: Fri Oct 11 22:49:34 2024 00:26:32.606 read: IOPS=385, BW=96.4MiB/s (101MB/s)(992MiB/10284msec) 00:26:32.606 slat (usec): min=12, max=711094, avg=2359.03, stdev=15822.69 00:26:32.606 clat (usec): min=1363, max=1363.7k, avg=163375.00, stdev=192854.83 00:26:32.606 lat (usec): min=1523, max=1404.1k, avg=165734.03, stdev=195464.82 00:26:32.606 clat percentiles (msec): 00:26:32.606 | 1.00th=[ 11], 5.00th=[ 30], 10.00th=[ 38], 20.00th=[ 43], 00:26:32.606 | 30.00th=[ 46], 40.00th=[ 50], 50.00th=[ 53], 60.00th=[ 122], 00:26:32.606 | 70.00th=[ 205], 80.00th=[ 268], 90.00th=[ 435], 95.00th=[ 531], 00:26:32.606 | 99.00th=[ 869], 99.50th=[ 1267], 99.90th=[ 1267], 99.95th=[ 1267], 00:26:32.606 | 99.99th=[ 1368] 00:26:32.606 bw ( KiB/s): min=18432, max=329216, per=20.68%, avg=99942.40, stdev=98263.09, samples=20 00:26:32.606 iops : min= 72, max= 1286, avg=390.40, stdev=383.84, samples=20 00:26:32.606 lat (msec) : 2=0.05%, 4=0.05%, 10=0.88%, 20=3.30%, 50=37.66% 00:26:32.606 lat (msec) : 100=16.84%, 250=19.89%, 500=15.15%, 750=4.26%, 1000=1.24% 00:26:32.606 lat (msec) : 2000=0.68% 00:26:32.606 cpu : usr=0.28%, sys=1.26%, ctx=834, majf=0, minf=4098 00:26:32.606 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:32.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.606 issued rwts: total=3967,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.606 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.606 job3: (groupid=0, jobs=1): err= 0: pid=301161: Fri Oct 11 22:49:34 2024 00:26:32.606 
read: IOPS=98, BW=24.6MiB/s (25.8MB/s)(251MiB/10224msec) 00:26:32.606 slat (usec): min=8, max=427143, avg=8560.99, stdev=39066.73 00:26:32.606 clat (usec): min=1694, max=1814.9k, avg=641993.94, stdev=437651.68 00:26:32.606 lat (usec): min=1725, max=1815.0k, avg=650554.93, stdev=444277.20 00:26:32.606 clat percentiles (msec): 00:26:32.606 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 18], 20.00th=[ 230], 00:26:32.606 | 30.00th=[ 426], 40.00th=[ 502], 50.00th=[ 575], 60.00th=[ 693], 00:26:32.606 | 70.00th=[ 818], 80.00th=[ 1083], 90.00th=[ 1301], 95.00th=[ 1418], 00:26:32.606 | 99.00th=[ 1620], 99.50th=[ 1653], 99.90th=[ 1687], 99.95th=[ 1821], 00:26:32.606 | 99.99th=[ 1821] 00:26:32.606 bw ( KiB/s): min= 6144, max=111104, per=4.98%, avg=24089.60, stdev=22792.54, samples=20 00:26:32.606 iops : min= 24, max= 434, avg=94.10, stdev=89.03, samples=20 00:26:32.606 lat (msec) : 2=0.10%, 4=4.38%, 10=1.69%, 20=7.76%, 50=0.30% 00:26:32.606 lat (msec) : 100=0.40%, 250=6.27%, 500=18.61%, 750=27.36%, 1000=8.96% 00:26:32.606 lat (msec) : 2000=24.18% 00:26:32.606 cpu : usr=0.00%, sys=0.35%, ctx=244, majf=0, minf=4097 00:26:32.606 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:26:32.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.606 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.606 issued rwts: total=1005,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.606 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.606 job4: (groupid=0, jobs=1): err= 0: pid=301162: Fri Oct 11 22:49:34 2024 00:26:32.606 read: IOPS=287, BW=72.0MiB/s (75.4MB/s)(736MiB/10229msec) 00:26:32.606 slat (usec): min=8, max=728455, avg=2246.24, stdev=19548.20 00:26:32.606 clat (usec): min=1699, max=1858.5k, avg=219911.49, stdev=312209.62 00:26:32.606 lat (usec): min=1880, max=2013.6k, avg=222157.73, stdev=315645.84 00:26:32.606 clat percentiles (msec): 00:26:32.606 | 1.00th=[ 5], 5.00th=[ 33], 10.00th=[ 41], 
20.00th=[ 77], 00:26:32.606 | 30.00th=[ 84], 40.00th=[ 95], 50.00th=[ 103], 60.00th=[ 113], 00:26:32.606 | 70.00th=[ 142], 80.00th=[ 197], 90.00th=[ 760], 95.00th=[ 953], 00:26:32.606 | 99.00th=[ 1452], 99.50th=[ 1636], 99.90th=[ 1720], 99.95th=[ 1854], 00:26:32.606 | 99.99th=[ 1854] 00:26:32.606 bw ( KiB/s): min= 9216, max=198144, per=15.25%, avg=73728.00, stdev=67269.70, samples=20 00:26:32.606 iops : min= 36, max= 774, avg=288.00, stdev=262.77, samples=20 00:26:32.607 lat (msec) : 2=0.10%, 4=0.17%, 10=3.57%, 50=7.44%, 100=35.05% 00:26:32.607 lat (msec) : 250=38.49%, 500=2.65%, 750=2.41%, 1000=5.64%, 2000=4.48% 00:26:32.607 cpu : usr=0.10%, sys=0.89%, ctx=935, majf=0, minf=3721 00:26:32.607 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:32.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.607 issued rwts: total=2944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.607 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.607 job5: (groupid=0, jobs=1): err= 0: pid=301167: Fri Oct 11 22:49:34 2024 00:26:32.607 read: IOPS=102, BW=25.6MiB/s (26.9MB/s)(264MiB/10278msec) 00:26:32.607 slat (usec): min=8, max=1106.4k, avg=8692.76, stdev=47544.92 00:26:32.607 clat (msec): min=10, max=2311, avg=614.79, stdev=496.98 00:26:32.607 lat (msec): min=10, max=2536, avg=623.48, stdev=503.21 00:26:32.607 clat percentiles (msec): 00:26:32.607 | 1.00th=[ 16], 5.00th=[ 29], 10.00th=[ 41], 20.00th=[ 62], 00:26:32.607 | 30.00th=[ 255], 40.00th=[ 456], 50.00th=[ 567], 60.00th=[ 709], 00:26:32.607 | 70.00th=[ 852], 80.00th=[ 986], 90.00th=[ 1116], 95.00th=[ 1469], 00:26:32.607 | 99.00th=[ 2198], 99.50th=[ 2299], 99.90th=[ 2299], 99.95th=[ 2299], 00:26:32.607 | 99.99th=[ 2299] 00:26:32.607 bw ( KiB/s): min= 5632, max=140569, per=5.52%, avg=26692.68, stdev=29137.46, samples=19 00:26:32.607 iops : min= 22, max= 549, avg=104.26, 
stdev=113.80, samples=19 00:26:32.607 lat (msec) : 20=2.18%, 50=14.04%, 100=3.89%, 250=8.54%, 500=15.09% 00:26:32.607 lat (msec) : 750=20.02%, 1000=17.65%, 2000=15.75%, >=2000=2.85% 00:26:32.607 cpu : usr=0.10%, sys=0.34%, ctx=163, majf=0, minf=4097 00:26:32.607 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.0% 00:26:32.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.607 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.607 issued rwts: total=1054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.607 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.607 job6: (groupid=0, jobs=1): err= 0: pid=301168: Fri Oct 11 22:49:34 2024 00:26:32.607 read: IOPS=214, BW=53.7MiB/s (56.3MB/s)(552MiB/10280msec) 00:26:32.607 slat (usec): min=8, max=310281, avg=2825.92, stdev=19864.02 00:26:32.607 clat (usec): min=1048, max=1900.1k, avg=294876.21, stdev=347658.51 00:26:32.607 lat (usec): min=1067, max=1900.1k, avg=297702.13, stdev=351164.04 00:26:32.607 clat percentiles (msec): 00:26:32.607 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 29], 00:26:32.607 | 30.00th=[ 81], 40.00th=[ 115], 50.00th=[ 153], 60.00th=[ 180], 00:26:32.607 | 70.00th=[ 243], 80.00th=[ 625], 90.00th=[ 869], 95.00th=[ 1011], 00:26:32.607 | 99.00th=[ 1401], 99.50th=[ 1737], 99.90th=[ 1770], 99.95th=[ 1905], 00:26:32.607 | 99.99th=[ 1905] 00:26:32.607 bw ( KiB/s): min= 9728, max=169984, per=11.36%, avg=54886.40, stdev=45481.05, samples=20 00:26:32.607 iops : min= 38, max= 664, avg=214.40, stdev=177.66, samples=20 00:26:32.607 lat (msec) : 2=0.18%, 4=0.27%, 10=3.80%, 20=13.09%, 50=5.80% 00:26:32.607 lat (msec) : 100=14.09%, 250=33.29%, 500=6.07%, 750=8.11%, 1000=9.96% 00:26:32.607 lat (msec) : 2000=5.34% 00:26:32.607 cpu : usr=0.15%, sys=0.78%, ctx=842, majf=0, minf=4098 00:26:32.607 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.1% 00:26:32.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.607 issued rwts: total=2208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.607 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.607 job7: (groupid=0, jobs=1): err= 0: pid=301169: Fri Oct 11 22:49:34 2024 00:26:32.607 read: IOPS=243, BW=60.8MiB/s (63.8MB/s)(622MiB/10226msec) 00:26:32.607 slat (usec): min=8, max=539523, avg=3510.40, stdev=21441.90 00:26:32.607 clat (usec): min=1855, max=1535.1k, avg=259327.64, stdev=270133.81 00:26:32.607 lat (usec): min=1933, max=1535.1k, avg=262838.04, stdev=273779.54 00:26:32.607 clat percentiles (msec): 00:26:32.607 | 1.00th=[ 6], 5.00th=[ 15], 10.00th=[ 19], 20.00th=[ 85], 00:26:32.607 | 30.00th=[ 117], 40.00th=[ 132], 50.00th=[ 144], 60.00th=[ 188], 00:26:32.607 | 70.00th=[ 284], 80.00th=[ 351], 90.00th=[ 693], 95.00th=[ 818], 00:26:32.607 | 99.00th=[ 1183], 99.50th=[ 1183], 99.90th=[ 1267], 99.95th=[ 1401], 00:26:32.607 | 99.99th=[ 1536] 00:26:32.607 bw ( KiB/s): min=13312, max=147456, per=12.84%, avg=62054.40, stdev=45020.34, samples=20 00:26:32.607 iops : min= 52, max= 576, avg=242.40, stdev=175.86, samples=20 00:26:32.607 lat (msec) : 2=0.08%, 4=0.12%, 10=3.38%, 20=8.52%, 50=2.09% 00:26:32.607 lat (msec) : 100=8.80%, 250=44.57%, 500=15.55%, 750=9.53%, 1000=3.82% 00:26:32.607 lat (msec) : 2000=3.54% 00:26:32.607 cpu : usr=0.13%, sys=0.82%, ctx=492, majf=0, minf=4097 00:26:32.607 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:26:32.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.607 issued rwts: total=2488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.607 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.607 job8: (groupid=0, jobs=1): err= 0: pid=301170: Fri Oct 11 22:49:34 2024 00:26:32.607 read: IOPS=77, 
BW=19.3MiB/s (20.2MB/s)(199MiB/10283msec) 00:26:32.607 slat (usec): min=8, max=728874, avg=11878.65, stdev=52256.45 00:26:32.607 clat (msec): min=44, max=1808, avg=816.21, stdev=361.37 00:26:32.607 lat (msec): min=45, max=1869, avg=828.09, stdev=368.45 00:26:32.607 clat percentiles (msec): 00:26:32.607 | 1.00th=[ 46], 5.00th=[ 222], 10.00th=[ 321], 20.00th=[ 558], 00:26:32.607 | 30.00th=[ 609], 40.00th=[ 718], 50.00th=[ 776], 60.00th=[ 852], 00:26:32.607 | 70.00th=[ 1036], 80.00th=[ 1133], 90.00th=[ 1284], 95.00th=[ 1519], 00:26:32.607 | 99.00th=[ 1620], 99.50th=[ 1620], 99.90th=[ 1804], 99.95th=[ 1804], 00:26:32.607 | 99.99th=[ 1804] 00:26:32.607 bw ( KiB/s): min= 4096, max=56320, per=3.87%, avg=18688.00, stdev=10978.04, samples=20 00:26:32.607 iops : min= 16, max= 220, avg=73.00, stdev=42.88, samples=20 00:26:32.607 lat (msec) : 50=1.89%, 250=4.91%, 500=8.06%, 750=29.97%, 1000=23.68% 00:26:32.607 lat (msec) : 2000=31.49% 00:26:32.607 cpu : usr=0.04%, sys=0.22%, ctx=109, majf=0, minf=4098 00:26:32.607 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.1% 00:26:32.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.607 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.607 issued rwts: total=794,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.607 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.607 job9: (groupid=0, jobs=1): err= 0: pid=301171: Fri Oct 11 22:49:34 2024 00:26:32.607 read: IOPS=63, BW=15.8MiB/s (16.5MB/s)(162MiB/10283msec) 00:26:32.607 slat (usec): min=12, max=729412, avg=15653.31, stdev=63790.33 00:26:32.607 clat (msec): min=135, max=2100, avg=999.16, stdev=336.13 00:26:32.607 lat (msec): min=313, max=2100, avg=1014.81, stdev=342.01 00:26:32.607 clat percentiles (msec): 00:26:32.607 | 1.00th=[ 313], 5.00th=[ 439], 10.00th=[ 502], 20.00th=[ 743], 00:26:32.607 | 30.00th=[ 793], 40.00th=[ 902], 50.00th=[ 986], 60.00th=[ 1053], 00:26:32.607 | 70.00th=[ 
1167], 80.00th=[ 1368], 90.00th=[ 1502], 95.00th=[ 1536], 00:26:32.607 | 99.00th=[ 1787], 99.50th=[ 1787], 99.90th=[ 2106], 99.95th=[ 2106], 00:26:32.607 | 99.99th=[ 2106] 00:26:32.607 bw ( KiB/s): min= 3584, max=27136, per=3.26%, avg=15738.53, stdev=6439.20, samples=19 00:26:32.607 iops : min= 14, max= 106, avg=61.47, stdev=25.16, samples=19 00:26:32.607 lat (msec) : 250=0.15%, 500=7.25%, 750=15.74%, 1000=31.17%, 2000=45.52% 00:26:32.607 lat (msec) : >=2000=0.15% 00:26:32.607 cpu : usr=0.00%, sys=0.30%, ctx=63, majf=0, minf=4097 00:26:32.607 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.3% 00:26:32.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.607 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:26:32.607 issued rwts: total=648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.607 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.607 job10: (groupid=0, jobs=1): err= 0: pid=301172: Fri Oct 11 22:49:34 2024 00:26:32.607 read: IOPS=155, BW=38.8MiB/s (40.6MB/s)(399MiB/10282msec) 00:26:32.607 slat (usec): min=9, max=630466, avg=5094.21, stdev=27290.16 00:26:32.607 clat (msec): min=10, max=1536, avg=407.36, stdev=312.00 00:26:32.607 lat (msec): min=10, max=2024, avg=412.45, stdev=317.88 00:26:32.607 clat percentiles (msec): 00:26:32.607 | 1.00th=[ 23], 5.00th=[ 40], 10.00th=[ 65], 20.00th=[ 106], 00:26:32.607 | 30.00th=[ 136], 40.00th=[ 284], 50.00th=[ 418], 60.00th=[ 481], 00:26:32.607 | 70.00th=[ 523], 80.00th=[ 634], 90.00th=[ 818], 95.00th=[ 1003], 00:26:32.607 | 99.00th=[ 1368], 99.50th=[ 1418], 99.90th=[ 1418], 99.95th=[ 1536], 00:26:32.607 | 99.99th=[ 1536] 00:26:32.607 bw ( KiB/s): min=13312, max=141824, per=8.10%, avg=39169.85, stdev=33208.26, samples=20 00:26:32.607 iops : min= 52, max= 554, avg=153.00, stdev=129.72, samples=20 00:26:32.607 lat (msec) : 20=0.94%, 50=5.96%, 100=12.80%, 250=19.13%, 500=25.28% 00:26:32.607 lat (msec) : 750=23.15%, 1000=7.72%, 
2000=5.02% 00:26:32.607 cpu : usr=0.06%, sys=0.59%, ctx=224, majf=0, minf=4097 00:26:32.607 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.0% 00:26:32.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.607 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.607 issued rwts: total=1594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.607 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.607 00:26:32.607 Run status group 0 (all jobs): 00:26:32.607 READ: bw=472MiB/s (495MB/s), 15.8MiB/s-96.4MiB/s (16.5MB/s-101MB/s), io=4854MiB (5090MB), run=10224-10284msec 00:26:32.607 00:26:32.607 Disk stats (read/write): 00:26:32.607 nvme0n1: ios=1590/0, merge=0/0, ticks=1227483/0, in_queue=1227483, util=97.15% 00:26:32.607 nvme10n1: ios=3733/0, merge=0/0, ticks=1257481/0, in_queue=1257481, util=97.39% 00:26:32.607 nvme1n1: ios=7815/0, merge=0/0, ticks=1237166/0, in_queue=1237166, util=97.70% 00:26:32.607 nvme2n1: ios=1927/0, merge=0/0, ticks=1242106/0, in_queue=1242106, util=97.87% 00:26:32.607 nvme3n1: ios=5795/0, merge=0/0, ticks=1234328/0, in_queue=1234328, util=97.96% 00:26:32.607 nvme4n1: ios=2033/0, merge=0/0, ticks=1242028/0, in_queue=1242028, util=98.31% 00:26:32.607 nvme5n1: ios=4350/0, merge=0/0, ticks=1247265/0, in_queue=1247265, util=98.49% 00:26:32.607 nvme6n1: ios=4901/0, merge=0/0, ticks=1255128/0, in_queue=1255128, util=98.60% 00:26:32.607 nvme7n1: ios=1484/0, merge=0/0, ticks=1222211/0, in_queue=1222211, util=99.00% 00:26:32.607 nvme8n1: ios=1183/0, merge=0/0, ticks=1224123/0, in_queue=1224123, util=99.14% 00:26:32.607 nvme9n1: ios=3105/0, merge=0/0, ticks=1244241/0, in_queue=1244241, util=99.27% 00:26:32.607 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:32.607 [global] 00:26:32.607 thread=1 
00:26:32.607 invalidate=1 00:26:32.607 rw=randwrite 00:26:32.607 time_based=1 00:26:32.607 runtime=10 00:26:32.607 ioengine=libaio 00:26:32.607 direct=1 00:26:32.607 bs=262144 00:26:32.607 iodepth=64 00:26:32.607 norandommap=1 00:26:32.607 numjobs=1 00:26:32.607 00:26:32.607 [job0] 00:26:32.607 filename=/dev/nvme0n1 00:26:32.607 [job1] 00:26:32.607 filename=/dev/nvme10n1 00:26:32.607 [job2] 00:26:32.607 filename=/dev/nvme1n1 00:26:32.607 [job3] 00:26:32.608 filename=/dev/nvme2n1 00:26:32.608 [job4] 00:26:32.608 filename=/dev/nvme3n1 00:26:32.608 [job5] 00:26:32.608 filename=/dev/nvme4n1 00:26:32.608 [job6] 00:26:32.608 filename=/dev/nvme5n1 00:26:32.608 [job7] 00:26:32.608 filename=/dev/nvme6n1 00:26:32.608 [job8] 00:26:32.608 filename=/dev/nvme7n1 00:26:32.608 [job9] 00:26:32.608 filename=/dev/nvme8n1 00:26:32.608 [job10] 00:26:32.608 filename=/dev/nvme9n1 00:26:32.608 Could not set queue depth (nvme0n1) 00:26:32.608 Could not set queue depth (nvme10n1) 00:26:32.608 Could not set queue depth (nvme1n1) 00:26:32.608 Could not set queue depth (nvme2n1) 00:26:32.608 Could not set queue depth (nvme3n1) 00:26:32.608 Could not set queue depth (nvme4n1) 00:26:32.608 Could not set queue depth (nvme5n1) 00:26:32.608 Could not set queue depth (nvme6n1) 00:26:32.608 Could not set queue depth (nvme7n1) 00:26:32.608 Could not set queue depth (nvme8n1) 00:26:32.608 Could not set queue depth (nvme9n1) 00:26:32.608 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.608 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.608 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.608 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.608 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.608 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.608 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.608 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.608 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.608 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.608 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.608 fio-3.35 00:26:32.608 Starting 11 threads 00:26:42.577 00:26:42.577 job0: (groupid=0, jobs=1): err= 0: pid=301755: Fri Oct 11 22:49:45 2024 00:26:42.577 write: IOPS=288, BW=72.1MiB/s (75.6MB/s)(741MiB/10272msec); 0 zone resets 00:26:42.577 slat (usec): min=16, max=68797, avg=2295.51, stdev=8009.52 00:26:42.577 clat (usec): min=802, max=771311, avg=219342.62, stdev=200063.79 00:26:42.577 lat (usec): min=861, max=795057, avg=221638.12, stdev=202439.17 00:26:42.577 clat percentiles (msec): 00:26:42.577 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 18], 20.00th=[ 48], 00:26:42.577 | 30.00th=[ 51], 40.00th=[ 53], 50.00th=[ 161], 60.00th=[ 257], 00:26:42.577 | 70.00th=[ 347], 80.00th=[ 430], 90.00th=[ 510], 95.00th=[ 575], 00:26:42.577 | 99.00th=[ 726], 99.50th=[ 768], 99.90th=[ 768], 99.95th=[ 768], 00:26:42.577 | 99.99th=[ 768] 00:26:42.577 bw ( KiB/s): min=22528, max=303104, per=9.28%, avg=74237.30, stdev=71546.84, samples=20 00:26:42.577 iops : min= 88, max= 1184, avg=289.95, stdev=279.51, samples=20 00:26:42.577 lat (usec) : 1000=0.17% 00:26:42.577 lat (msec) : 2=0.51%, 4=2.13%, 10=3.81%, 20=4.18%, 50=19.30% 00:26:42.577 lat (msec) : 100=16.80%, 
250=12.62%, 500=28.98%, 750=10.86%, 1000=0.64% 00:26:42.577 cpu : usr=0.97%, sys=1.01%, ctx=1705, majf=0, minf=1 00:26:42.577 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:42.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.577 issued rwts: total=0,2964,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.577 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.577 job1: (groupid=0, jobs=1): err= 0: pid=301767: Fri Oct 11 22:49:45 2024 00:26:42.577 write: IOPS=276, BW=69.2MiB/s (72.5MB/s)(711MiB/10273msec); 0 zone resets 00:26:42.577 slat (usec): min=15, max=101370, avg=2179.44, stdev=6714.39 00:26:42.577 clat (usec): min=709, max=790901, avg=228987.25, stdev=139977.36 00:26:42.577 lat (usec): min=733, max=790975, avg=231166.69, stdev=141429.05 00:26:42.577 clat percentiles (usec): 00:26:42.577 | 1.00th=[ 1270], 5.00th=[ 12780], 10.00th=[ 33162], 20.00th=[107480], 00:26:42.577 | 30.00th=[143655], 40.00th=[181404], 50.00th=[217056], 60.00th=[265290], 00:26:42.577 | 70.00th=[308282], 80.00th=[350225], 90.00th=[400557], 95.00th=[450888], 00:26:42.577 | 99.00th=[557843], 99.50th=[616563], 99.90th=[784335], 99.95th=[792724], 00:26:42.577 | 99.99th=[792724] 00:26:42.577 bw ( KiB/s): min=28672, max=122123, per=8.90%, avg=71130.15, stdev=26214.41, samples=20 00:26:42.577 iops : min= 112, max= 477, avg=277.85, stdev=102.40, samples=20 00:26:42.577 lat (usec) : 750=0.18%, 1000=0.32% 00:26:42.577 lat (msec) : 2=1.20%, 4=0.18%, 10=1.65%, 20=3.62%, 50=4.79% 00:26:42.577 lat (msec) : 100=7.21%, 250=37.44%, 500=39.97%, 750=3.10%, 1000=0.35% 00:26:42.577 cpu : usr=0.75%, sys=0.97%, ctx=1801, majf=0, minf=1 00:26:42.577 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:42.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.577 issued rwts: total=0,2842,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.577 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.577 job2: (groupid=0, jobs=1): err= 0: pid=301768: Fri Oct 11 22:49:45 2024 00:26:42.577 write: IOPS=269, BW=67.4MiB/s (70.7MB/s)(692MiB/10272msec); 0 zone resets 00:26:42.577 slat (usec): min=24, max=78605, avg=2800.10, stdev=7531.93 00:26:42.577 clat (usec): min=1032, max=853259, avg=234434.57, stdev=161184.71 00:26:42.577 lat (usec): min=1109, max=853308, avg=237234.68, stdev=162998.90 00:26:42.577 clat percentiles (msec): 00:26:42.577 | 1.00th=[ 29], 5.00th=[ 74], 10.00th=[ 82], 20.00th=[ 87], 00:26:42.577 | 30.00th=[ 106], 40.00th=[ 115], 50.00th=[ 176], 60.00th=[ 275], 00:26:42.577 | 70.00th=[ 334], 80.00th=[ 401], 90.00th=[ 460], 95.00th=[ 510], 00:26:42.577 | 99.00th=[ 634], 99.50th=[ 760], 99.90th=[ 827], 99.95th=[ 852], 00:26:42.577 | 99.99th=[ 852] 00:26:42.577 bw ( KiB/s): min=24576, max=185344, per=8.66%, avg=69244.55, stdev=43074.38, samples=20 00:26:42.577 iops : min= 96, max= 724, avg=270.45, stdev=168.29, samples=20 00:26:42.577 lat (msec) : 2=0.18%, 4=0.33%, 10=0.04%, 20=0.14%, 50=2.13% 00:26:42.577 lat (msec) : 100=24.56%, 250=30.66%, 500=36.40%, 750=5.06%, 1000=0.51% 00:26:42.577 cpu : usr=0.93%, sys=0.67%, ctx=1190, majf=0, minf=1 00:26:42.577 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:42.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.577 issued rwts: total=0,2769,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.577 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.577 job3: (groupid=0, jobs=1): err= 0: pid=301769: Fri Oct 11 22:49:45 2024 00:26:42.577 write: IOPS=263, BW=66.0MiB/s (69.2MB/s)(673MiB/10191msec); 0 zone resets 00:26:42.577 slat (usec): min=20, max=126305, avg=1845.17, 
stdev=8059.96 00:26:42.577 clat (usec): min=739, max=852631, avg=240460.53, stdev=203496.30 00:26:42.577 lat (usec): min=769, max=862082, avg=242305.70, stdev=205690.82 00:26:42.577 clat percentiles (msec): 00:26:42.577 | 1.00th=[ 3], 5.00th=[ 19], 10.00th=[ 26], 20.00th=[ 48], 00:26:42.577 | 30.00th=[ 88], 40.00th=[ 138], 50.00th=[ 186], 60.00th=[ 234], 00:26:42.577 | 70.00th=[ 317], 80.00th=[ 456], 90.00th=[ 550], 95.00th=[ 634], 00:26:42.577 | 99.00th=[ 818], 99.50th=[ 835], 99.90th=[ 844], 99.95th=[ 844], 00:26:42.577 | 99.99th=[ 852] 00:26:42.577 bw ( KiB/s): min=19456, max=168448, per=8.41%, avg=67247.80, stdev=40491.45, samples=20 00:26:42.577 iops : min= 76, max= 658, avg=262.65, stdev=158.20, samples=20 00:26:42.577 lat (usec) : 750=0.07%, 1000=0.04% 00:26:42.577 lat (msec) : 2=0.52%, 4=0.78%, 10=0.74%, 20=5.09%, 50=13.46% 00:26:42.577 lat (msec) : 100=11.23%, 250=31.52%, 500=23.46%, 750=10.93%, 1000=2.16% 00:26:42.577 cpu : usr=0.68%, sys=0.92%, ctx=1973, majf=0, minf=1 00:26:42.577 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:42.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.577 issued rwts: total=0,2690,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.577 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.577 job4: (groupid=0, jobs=1): err= 0: pid=301770: Fri Oct 11 22:49:45 2024 00:26:42.577 write: IOPS=375, BW=93.9MiB/s (98.5MB/s)(965MiB/10272msec); 0 zone resets 00:26:42.577 slat (usec): min=21, max=225032, avg=1913.82, stdev=6806.63 00:26:42.577 clat (usec): min=771, max=859405, avg=168252.60, stdev=132939.45 00:26:42.577 lat (usec): min=838, max=859468, avg=170166.41, stdev=134425.85 00:26:42.577 clat percentiles (msec): 00:26:42.577 | 1.00th=[ 26], 5.00th=[ 42], 10.00th=[ 44], 20.00th=[ 56], 00:26:42.577 | 30.00th=[ 81], 40.00th=[ 89], 50.00th=[ 112], 60.00th=[ 176], 00:26:42.577 
| 70.00th=[ 207], 80.00th=[ 279], 90.00th=[ 363], 95.00th=[ 414], 00:26:42.577 | 99.00th=[ 575], 99.50th=[ 693], 99.90th=[ 835], 99.95th=[ 860], 00:26:42.577 | 99.99th=[ 860] 00:26:42.577 bw ( KiB/s): min=25088, max=286720, per=12.16%, avg=97211.95, stdev=63633.41, samples=20 00:26:42.577 iops : min= 98, max= 1120, avg=379.70, stdev=248.57, samples=20 00:26:42.577 lat (usec) : 1000=0.10% 00:26:42.577 lat (msec) : 2=0.31%, 4=0.31%, 10=0.05%, 20=0.13%, 50=18.32% 00:26:42.577 lat (msec) : 100=27.02%, 250=29.64%, 500=22.07%, 750=1.68%, 1000=0.36% 00:26:42.577 cpu : usr=1.23%, sys=1.13%, ctx=1752, majf=0, minf=1 00:26:42.577 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:42.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.577 issued rwts: total=0,3860,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.577 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.577 job5: (groupid=0, jobs=1): err= 0: pid=301771: Fri Oct 11 22:49:45 2024 00:26:42.577 write: IOPS=276, BW=69.1MiB/s (72.5MB/s)(706MiB/10208msec); 0 zone resets 00:26:42.577 slat (usec): min=16, max=196246, avg=2366.87, stdev=9099.52 00:26:42.577 clat (usec): min=635, max=773963, avg=228724.01, stdev=202136.24 00:26:42.577 lat (usec): min=662, max=810642, avg=231090.88, stdev=204065.62 00:26:42.577 clat percentiles (usec): 00:26:42.577 | 1.00th=[ 1254], 5.00th=[ 3621], 10.00th=[ 6587], 20.00th=[ 18744], 00:26:42.577 | 30.00th=[ 77071], 40.00th=[ 96994], 50.00th=[168821], 60.00th=[256902], 00:26:42.577 | 70.00th=[346031], 80.00th=[429917], 90.00th=[530580], 95.00th=[591397], 00:26:42.577 | 99.00th=[725615], 99.50th=[750781], 99.90th=[767558], 99.95th=[767558], 00:26:42.577 | 99.99th=[775947] 00:26:42.577 bw ( KiB/s): min=20480, max=164352, per=8.83%, avg=70640.20, stdev=49690.42, samples=20 00:26:42.577 iops : min= 80, max= 642, avg=275.90, stdev=194.03, 
samples=20 00:26:42.577 lat (usec) : 750=0.21%, 1000=0.57% 00:26:42.577 lat (msec) : 2=1.28%, 4=3.65%, 10=8.68%, 20=5.77%, 50=4.68% 00:26:42.577 lat (msec) : 100=15.83%, 250=18.46%, 500=28.91%, 750=11.58%, 1000=0.39% 00:26:42.577 cpu : usr=0.75%, sys=1.05%, ctx=1659, majf=0, minf=1 00:26:42.577 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:42.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.577 issued rwts: total=0,2823,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.577 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.577 job6: (groupid=0, jobs=1): err= 0: pid=301772: Fri Oct 11 22:49:45 2024 00:26:42.577 write: IOPS=393, BW=98.3MiB/s (103MB/s)(993MiB/10099msec); 0 zone resets 00:26:42.577 slat (usec): min=16, max=79938, avg=2093.36, stdev=6146.66 00:26:42.577 clat (usec): min=1378, max=727547, avg=160432.27, stdev=137335.26 00:26:42.577 lat (usec): min=1415, max=734636, avg=162525.63, stdev=139192.60 00:26:42.577 clat percentiles (msec): 00:26:42.577 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 18], 20.00th=[ 39], 00:26:42.577 | 30.00th=[ 71], 40.00th=[ 97], 50.00th=[ 127], 60.00th=[ 171], 00:26:42.578 | 70.00th=[ 201], 80.00th=[ 257], 90.00th=[ 342], 95.00th=[ 464], 00:26:42.578 | 99.00th=[ 575], 99.50th=[ 667], 99.90th=[ 701], 99.95th=[ 718], 00:26:42.578 | 99.99th=[ 726] 00:26:42.578 bw ( KiB/s): min=28672, max=282112, per=12.51%, avg=100060.75, stdev=67214.99, samples=20 00:26:42.578 iops : min= 112, max= 1102, avg=390.85, stdev=262.56, samples=20 00:26:42.578 lat (msec) : 2=0.08%, 4=1.91%, 10=4.66%, 20=4.56%, 50=12.19% 00:26:42.578 lat (msec) : 100=18.00%, 250=37.84%, 500=16.84%, 750=3.93% 00:26:42.578 cpu : usr=1.13%, sys=1.48%, ctx=2045, majf=0, minf=1 00:26:42.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:42.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.578 issued rwts: total=0,3972,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.578 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.578 job7: (groupid=0, jobs=1): err= 0: pid=301773: Fri Oct 11 22:49:45 2024 00:26:42.578 write: IOPS=240, BW=60.1MiB/s (63.0MB/s)(606MiB/10086msec); 0 zone resets 00:26:42.578 slat (usec): min=25, max=69510, avg=3773.59, stdev=9189.64 00:26:42.578 clat (msec): min=6, max=769, avg=262.28, stdev=175.14 00:26:42.578 lat (msec): min=6, max=769, avg=266.05, stdev=177.42 00:26:42.578 clat percentiles (msec): 00:26:42.578 | 1.00th=[ 18], 5.00th=[ 85], 10.00th=[ 107], 20.00th=[ 116], 00:26:42.578 | 30.00th=[ 138], 40.00th=[ 167], 50.00th=[ 197], 60.00th=[ 232], 00:26:42.578 | 70.00th=[ 330], 80.00th=[ 451], 90.00th=[ 531], 95.00th=[ 609], 00:26:42.578 | 99.00th=[ 735], 99.50th=[ 768], 99.90th=[ 768], 99.95th=[ 768], 00:26:42.578 | 99.99th=[ 768] 00:26:42.578 bw ( KiB/s): min=22528, max=143872, per=7.56%, avg=60462.75, stdev=34881.85, samples=20 00:26:42.578 iops : min= 88, max= 562, avg=236.15, stdev=136.27, samples=20 00:26:42.578 lat (msec) : 10=0.16%, 20=1.11%, 50=1.32%, 100=4.91%, 250=56.12% 00:26:42.578 lat (msec) : 500=23.05%, 750=12.58%, 1000=0.74% 00:26:42.578 cpu : usr=0.74%, sys=0.75%, ctx=807, majf=0, minf=1 00:26:42.578 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:26:42.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.578 issued rwts: total=0,2425,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.578 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.578 job8: (groupid=0, jobs=1): err= 0: pid=301774: Fri Oct 11 22:49:45 2024 00:26:42.578 write: IOPS=245, BW=61.4MiB/s (64.4MB/s)(631MiB/10274msec); 0 zone resets 00:26:42.578 slat (usec): 
min=21, max=103621, avg=2949.27, stdev=8115.46 00:26:42.578 clat (usec): min=888, max=841306, avg=257365.36, stdev=152898.66 00:26:42.578 lat (usec): min=921, max=841366, avg=260314.63, stdev=154625.31 00:26:42.578 clat percentiles (usec): 00:26:42.578 | 1.00th=[ 1450], 5.00th=[ 21103], 10.00th=[103285], 20.00th=[135267], 00:26:42.578 | 30.00th=[166724], 40.00th=[193987], 50.00th=[217056], 60.00th=[270533], 00:26:42.578 | 70.00th=[320865], 80.00th=[379585], 90.00th=[476054], 95.00th=[566232], 00:26:42.578 | 99.00th=[658506], 99.50th=[742392], 99.90th=[809501], 99.95th=[843056], 00:26:42.578 | 99.99th=[843056] 00:26:42.578 bw ( KiB/s): min=24576, max=135680, per=7.88%, avg=62972.75, stdev=30436.02, samples=20 00:26:42.578 iops : min= 96, max= 530, avg=245.95, stdev=118.93, samples=20 00:26:42.578 lat (usec) : 1000=0.16% 00:26:42.578 lat (msec) : 2=1.58%, 4=0.52%, 10=1.35%, 20=1.23%, 50=2.50% 00:26:42.578 lat (msec) : 100=1.98%, 250=47.11%, 500=34.83%, 750=8.36%, 1000=0.40% 00:26:42.578 cpu : usr=0.86%, sys=0.78%, ctx=1248, majf=0, minf=1 00:26:42.578 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:26:42.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.578 issued rwts: total=0,2524,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.578 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.578 job9: (groupid=0, jobs=1): err= 0: pid=301775: Fri Oct 11 22:49:45 2024 00:26:42.578 write: IOPS=302, BW=75.7MiB/s (79.3MB/s)(765MiB/10104msec); 0 zone resets 00:26:42.578 slat (usec): min=18, max=80955, avg=2129.66, stdev=6852.90 00:26:42.578 clat (usec): min=811, max=746208, avg=209225.76, stdev=178276.97 00:26:42.578 lat (usec): min=869, max=746253, avg=211355.42, stdev=179823.88 00:26:42.578 clat percentiles (usec): 00:26:42.578 | 1.00th=[ 1467], 5.00th=[ 5014], 10.00th=[ 8848], 20.00th=[ 20579], 00:26:42.578 | 30.00th=[ 
55313], 40.00th=[124257], 50.00th=[183501], 60.00th=[227541], 00:26:42.578 | 70.00th=[304088], 80.00th=[379585], 90.00th=[476054], 95.00th=[530580], 00:26:42.578 | 99.00th=[641729], 99.50th=[666895], 99.90th=[742392], 99.95th=[742392], 00:26:42.578 | 99.99th=[742392] 00:26:42.578 bw ( KiB/s): min=28672, max=302592, per=9.59%, avg=76678.50, stdev=66531.21, samples=20 00:26:42.578 iops : min= 112, max= 1182, avg=299.50, stdev=259.89, samples=20 00:26:42.578 lat (usec) : 1000=0.39% 00:26:42.578 lat (msec) : 2=1.37%, 4=2.16%, 10=7.49%, 20=8.37%, 50=8.31% 00:26:42.578 lat (msec) : 100=8.60%, 250=25.64%, 500=30.94%, 750=6.74% 00:26:42.578 cpu : usr=0.82%, sys=1.16%, ctx=1898, majf=0, minf=1 00:26:42.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=97.9% 00:26:42.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.578 issued rwts: total=0,3058,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.578 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.578 job10: (groupid=0, jobs=1): err= 0: pid=301776: Fri Oct 11 22:49:45 2024 00:26:42.578 write: IOPS=210, BW=52.7MiB/s (55.2MB/s)(541MiB/10275msec); 0 zone resets 00:26:42.578 slat (usec): min=23, max=55003, avg=3632.11, stdev=9027.87 00:26:42.578 clat (msec): min=4, max=853, avg=299.90, stdev=157.11 00:26:42.578 lat (msec): min=4, max=854, avg=303.54, stdev=159.33 00:26:42.578 clat percentiles (msec): 00:26:42.578 | 1.00th=[ 19], 5.00th=[ 54], 10.00th=[ 101], 20.00th=[ 157], 00:26:42.578 | 30.00th=[ 205], 40.00th=[ 251], 50.00th=[ 279], 60.00th=[ 326], 00:26:42.578 | 70.00th=[ 401], 80.00th=[ 447], 90.00th=[ 514], 95.00th=[ 558], 00:26:42.578 | 99.00th=[ 684], 99.50th=[ 760], 99.90th=[ 827], 99.95th=[ 852], 00:26:42.578 | 99.99th=[ 852] 00:26:42.578 bw ( KiB/s): min=26624, max=97792, per=6.73%, avg=53785.60, stdev=23952.50, samples=20 00:26:42.578 iops : min= 104, max= 382, 
avg=210.10, stdev=93.56, samples=20 00:26:42.578 lat (msec) : 10=0.37%, 20=0.83%, 50=3.42%, 100=5.40%, 250=29.61% 00:26:42.578 lat (msec) : 500=49.01%, 750=10.72%, 1000=0.65% 00:26:42.578 cpu : usr=0.52%, sys=0.71%, ctx=1026, majf=0, minf=1 00:26:42.578 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:26:42.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.578 issued rwts: total=0,2165,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.578 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.578 00:26:42.578 Run status group 0 (all jobs): 00:26:42.578 WRITE: bw=781MiB/s (819MB/s), 52.7MiB/s-98.3MiB/s (55.2MB/s-103MB/s), io=8023MiB (8413MB), run=10086-10275msec 00:26:42.578 00:26:42.578 Disk stats (read/write): 00:26:42.578 nvme0n1: ios=49/5854, merge=0/0, ticks=43/1234735, in_queue=1234778, util=97.26% 00:26:42.578 nvme10n1: ios=48/5609, merge=0/0, ticks=74/1239762, in_queue=1239836, util=97.76% 00:26:42.578 nvme1n1: ios=42/5464, merge=0/0, ticks=3545/1222778, in_queue=1226323, util=99.91% 00:26:42.578 nvme2n1: ios=39/5362, merge=0/0, ticks=2374/1247012, in_queue=1249386, util=99.93% 00:26:42.578 nvme3n1: ios=43/7647, merge=0/0, ticks=2161/1199603, in_queue=1201764, util=99.93% 00:26:42.578 nvme4n1: ios=44/5618, merge=0/0, ticks=3369/1227432, in_queue=1230801, util=99.91% 00:26:42.578 nvme5n1: ios=47/7768, merge=0/0, ticks=3274/1203063, in_queue=1206337, util=99.93% 00:26:42.578 nvme6n1: ios=0/4642, merge=0/0, ticks=0/1199019, in_queue=1199019, util=98.32% 00:26:42.578 nvme7n1: ios=38/4969, merge=0/0, ticks=986/1223443, in_queue=1224429, util=99.94% 00:26:42.578 nvme8n1: ios=0/5901, merge=0/0, ticks=0/1215942, in_queue=1215942, util=98.93% 00:26:42.578 nvme9n1: ios=0/4255, merge=0/0, ticks=0/1224793, in_queue=1224793, util=99.11% 00:26:42.578 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@36 -- # sync 00:26:42.578 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:42.578 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:42.578 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:42.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:42.578 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:42.578 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:42.578 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:42.578 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:26:42.578 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:42.578 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:26:42.578 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:42.578 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:42.578 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.578 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:42.578 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.578 22:49:45 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:42.578 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:42.578 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:42.578 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:42.578 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:42.578 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:42.578 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:26:42.578 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:42.578 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:26:42.578 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:42.578 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:42.578 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.578 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:42.578 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.578 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:42.578 22:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # 
nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:42.836 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:42.836 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:42.836 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:42.836 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:42.836 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:26:42.836 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:42.836 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:26:42.836 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:42.836 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:42.836 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.836 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.094 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.094 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.094 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:43.094 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:43.094 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- 
# waitforserial_disconnect SPDK4 00:26:43.094 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:43.094 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:43.094 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:26:43.094 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:43.094 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:26:43.094 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:43.094 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:43.094 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.094 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.353 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.353 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.353 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:43.353 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:43.353 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:43.353 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:43.353 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:43.353 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:26:43.353 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:43.353 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:26:43.353 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:43.353 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:43.353 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.353 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.353 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.353 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.353 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:43.611 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:43.611 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:43.611 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:43.611 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:43.611 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:26:43.611 22:49:46 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:43.611 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:26:43.611 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:43.611 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:43.611 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.611 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.611 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.611 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.611 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:43.869 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:43.869 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:43.869 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:43.869 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:43.869 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:26:43.869 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:43.869 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # 
grep -q -w SPDK7 00:26:43.869 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:43.869 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:43.869 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.869 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.869 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.869 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.869 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:43.869 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:43.869 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:43.869 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:43.869 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:43.869 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:26:43.869 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:43.869 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:26:43.869 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:43.869 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:43.869 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.869 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.869 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.869 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.869 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:43.870 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:43.870 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:43.870 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:43.870 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:43.870 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:26:43.870 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:26:43.870 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:43.870 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:43.870 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:43.870 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 
00:26:43.870 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:44.127 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:44.127 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:44.127 22:49:47 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:44.127 rmmod nvme_tcp 00:26:44.127 rmmod nvme_fabrics 00:26:44.127 rmmod nvme_keyring 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@515 -- # '[' -n 296922 ']' 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # killprocess 296922 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 296922 ']' 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 296922 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:44.127 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 296922 00:26:44.385 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:44.385 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:44.385 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 296922' 00:26:44.385 killing process with pid 296922 00:26:44.385 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 296922 00:26:44.385 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 296922 00:26:44.953 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:44.953 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:44.953 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:44.953 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:44.953 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # iptables-save 00:26:44.953 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:44.953 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # iptables-restore 00:26:44.953 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:44.953 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:44.953 22:49:47 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.953 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:44.953 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.867 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:46.867 00:26:46.867 real 1m0.362s 00:26:46.867 user 3m34.898s 00:26:46.867 sys 0m13.922s 00:26:46.867 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:46.867 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:46.867 ************************************ 00:26:46.867 END TEST nvmf_multiconnection 00:26:46.867 ************************************ 00:26:46.867 22:49:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:46.867 22:49:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:46.867 22:49:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:46.867 22:49:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:46.867 ************************************ 00:26:46.867 START TEST nvmf_initiator_timeout 00:26:46.867 ************************************ 00:26:46.867 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:46.867 * Looking for test storage... 
00:26:46.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:46.867 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:46.867 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:26:46.867 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 
00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:47.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.125 --rc genhtml_branch_coverage=1 00:26:47.125 --rc genhtml_function_coverage=1 00:26:47.125 --rc genhtml_legend=1 00:26:47.125 --rc geninfo_all_blocks=1 00:26:47.125 --rc geninfo_unexecuted_blocks=1 00:26:47.125 00:26:47.125 ' 00:26:47.125 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:47.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.126 --rc genhtml_branch_coverage=1 00:26:47.126 --rc genhtml_function_coverage=1 00:26:47.126 --rc genhtml_legend=1 00:26:47.126 --rc geninfo_all_blocks=1 00:26:47.126 --rc geninfo_unexecuted_blocks=1 00:26:47.126 00:26:47.126 ' 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:47.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.126 --rc genhtml_branch_coverage=1 00:26:47.126 --rc genhtml_function_coverage=1 00:26:47.126 --rc genhtml_legend=1 00:26:47.126 --rc geninfo_all_blocks=1 00:26:47.126 --rc geninfo_unexecuted_blocks=1 00:26:47.126 00:26:47.126 ' 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:47.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.126 --rc genhtml_branch_coverage=1 00:26:47.126 --rc genhtml_function_coverage=1 00:26:47.126 --rc genhtml_legend=1 00:26:47.126 --rc geninfo_all_blocks=1 00:26:47.126 --rc geninfo_unexecuted_blocks=1 00:26:47.126 00:26:47.126 ' 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:47.126 
22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:47.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:47.126 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:49.030 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:49.030 22:49:52 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:49.030 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:49.030 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:49.030 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:49.031 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:49.031 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:49.290 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:49.290 22:49:52 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:49.290 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.290 22:49:52 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:49.290 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # is_hw=yes 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:49.290 22:49:52 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:49.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:49.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:26:49.290 00:26:49.290 --- 10.0.0.2 ping statistics --- 00:26:49.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.290 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:49.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:49.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:26:49.290 00:26:49.290 --- 10.0.0.1 ping statistics --- 00:26:49.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.290 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # return 0 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:49.290 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:49.291 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # nvmfpid=304832 
00:26:49.291 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:49.291 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # waitforlisten 304832 00:26:49.291 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 304832 ']' 00:26:49.291 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:49.291 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:49.291 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:49.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:49.291 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:49.291 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:49.291 [2024-10-11 22:49:52.498502] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:26:49.291 [2024-10-11 22:49:52.498596] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:49.548 [2024-10-11 22:49:52.577493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:49.548 [2024-10-11 22:49:52.629084] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:49.548 [2024-10-11 22:49:52.629157] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:49.548 [2024-10-11 22:49:52.629182] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:49.548 [2024-10-11 22:49:52.629213] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:49.548 [2024-10-11 22:49:52.629231] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:49.548 [2024-10-11 22:49:52.631362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.548 [2024-10-11 22:49:52.631429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:49.548 [2024-10-11 22:49:52.631499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.548 [2024-10-11 22:49:52.631491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:49.807 
22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:49.807 Malloc0 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:49.807 Delay0 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:49.807 [2024-10-11 22:49:52.938559] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:49.807 [2024-10-11 22:49:52.966823] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.807 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:50.372 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:50.372 
22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:26:50.372 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:50.372 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:50.372 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:26:52.898 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:52.898 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:52.898 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:26:52.898 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:52.898 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:52.898 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:26:52.898 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=305253 00:26:52.898 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:52.898 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:52.898 [global] 00:26:52.898 thread=1 00:26:52.898 invalidate=1 00:26:52.898 rw=write 00:26:52.898 time_based=1 00:26:52.898 runtime=60 00:26:52.898 ioengine=libaio 00:26:52.898 direct=1 00:26:52.898 bs=4096 00:26:52.898 
iodepth=1 00:26:52.898 norandommap=0 00:26:52.898 numjobs=1 00:26:52.898 00:26:52.898 verify_dump=1 00:26:52.898 verify_backlog=512 00:26:52.898 verify_state_save=0 00:26:52.898 do_verify=1 00:26:52.898 verify=crc32c-intel 00:26:52.898 [job0] 00:26:52.898 filename=/dev/nvme0n1 00:26:52.898 Could not set queue depth (nvme0n1) 00:26:52.898 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:52.898 fio-3.35 00:26:52.898 Starting 1 thread 00:26:55.434 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:55.434 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.434 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:55.434 true 00:26:55.434 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.434 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:55.434 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.434 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:55.434 true 00:26:55.434 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.434 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:55.434 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.434 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:26:55.434 true 00:26:55.434 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.434 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:55.434 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.434 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:55.434 true 00:26:55.434 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.434 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:58.715 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:58.715 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.715 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.715 true 00:26:58.715 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.715 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:58.715 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.715 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.715 true 00:26:58.715 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.715 22:50:01 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:58.715 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.715 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.715 true 00:26:58.715 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.715 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:58.715 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.715 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.715 true 00:26:58.715 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.716 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:58.716 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 305253 00:27:54.925 00:27:54.925 job0: (groupid=0, jobs=1): err= 0: pid=305326: Fri Oct 11 22:50:55 2024 00:27:54.925 read: IOPS=179, BW=717KiB/s (734kB/s)(42.0MiB/60001msec) 00:27:54.925 slat (nsec): min=5399, max=66257, avg=13239.30, stdev=5605.29 00:27:54.925 clat (usec): min=210, max=40896k, avg=5308.76, stdev=394444.64 00:27:54.925 lat (usec): min=216, max=40896k, avg=5322.00, stdev=394444.71 00:27:54.925 clat percentiles (usec): 00:27:54.925 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 247], 00:27:54.925 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 273], 00:27:54.925 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 330], 95.00th=[ 355], 00:27:54.925 | 
99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:27:54.925 | 99.99th=[44303] 00:27:54.925 write: IOPS=181, BW=725KiB/s (743kB/s)(42.5MiB/60001msec); 0 zone resets 00:27:54.925 slat (usec): min=7, max=41638, avg=23.53, stdev=483.44 00:27:54.925 clat (usec): min=166, max=3522, avg=223.39, stdev=63.38 00:27:54.925 lat (usec): min=174, max=41877, avg=246.91, stdev=488.19 00:27:54.925 clat percentiles (usec): 00:27:54.925 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 192], 00:27:54.925 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 217], 00:27:54.925 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 281], 95.00th=[ 334], 00:27:54.925 | 99.00th=[ 408], 99.50th=[ 424], 99.90th=[ 725], 99.95th=[ 914], 00:27:54.925 | 99.99th=[ 1762] 00:27:54.925 bw ( KiB/s): min= 1392, max= 8192, per=100.00%, avg=5461.33, stdev=2343.01, samples=15 00:27:54.925 iops : min= 348, max= 2048, avg=1365.20, stdev=585.94, samples=15 00:27:54.925 lat (usec) : 250=56.10%, 500=42.09%, 750=0.25%, 1000=0.04% 00:27:54.925 lat (msec) : 2=0.02%, 4=0.01%, 50=1.49%, >=2000=0.01% 00:27:54.925 cpu : usr=0.41%, sys=0.73%, ctx=21634, majf=0, minf=1 00:27:54.925 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:54.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.926 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.926 issued rwts: total=10752,10878,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.926 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:54.926 00:27:54.926 Run status group 0 (all jobs): 00:27:54.926 READ: bw=717KiB/s (734kB/s), 717KiB/s-717KiB/s (734kB/s-734kB/s), io=42.0MiB (44.0MB), run=60001-60001msec 00:27:54.926 WRITE: bw=725KiB/s (743kB/s), 725KiB/s-725KiB/s (743kB/s-743kB/s), io=42.5MiB (44.6MB), run=60001-60001msec 00:27:54.926 00:27:54.926 Disk stats (read/write): 00:27:54.926 nvme0n1: ios=10574/10752, merge=0/0, ticks=16318/2208, 
in_queue=18526, util=99.68% 00:27:54.926 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:54.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:54.926 nvmf hotplug test: fio successful as expected 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:54.926 22:50:56 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:54.926 rmmod nvme_tcp 00:27:54.926 rmmod nvme_fabrics 00:27:54.926 rmmod nvme_keyring 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@515 -- # '[' -n 304832 ']' 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # killprocess 304832 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 304832 ']' 00:27:54.926 
22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 304832 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 304832 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 304832' 00:27:54.926 killing process with pid 304832 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 304832 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 304832 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- # iptables-save 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- # 
iptables-restore 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:54.926 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.495 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:55.495 00:27:55.495 real 1m8.449s 00:27:55.495 user 4m11.685s 00:27:55.495 sys 0m7.063s 00:27:55.495 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:55.495 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:55.495 ************************************ 00:27:55.495 END TEST nvmf_initiator_timeout 00:27:55.495 ************************************ 00:27:55.495 22:50:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:55.495 22:50:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:55.495 22:50:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:55.495 22:50:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:55.495 22:50:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:57.397 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:57.397 22:51:00 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@315 -- # pci_devs=() 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 
-- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:57.398 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:57.398 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:57.398 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:57.398 22:51:00 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:57.398 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:57.398 ************************************ 00:27:57.398 START 
TEST nvmf_perf_adq 00:27:57.398 ************************************ 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:57.398 * Looking for test storage... 00:27:57.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:27:57.398 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:27:57.657 22:51:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:57.657 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:57.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.657 --rc genhtml_branch_coverage=1 00:27:57.657 --rc genhtml_function_coverage=1 00:27:57.658 --rc genhtml_legend=1 00:27:57.658 --rc geninfo_all_blocks=1 00:27:57.658 --rc geninfo_unexecuted_blocks=1 00:27:57.658 00:27:57.658 ' 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:57.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.658 --rc genhtml_branch_coverage=1 00:27:57.658 --rc genhtml_function_coverage=1 00:27:57.658 --rc genhtml_legend=1 00:27:57.658 --rc geninfo_all_blocks=1 00:27:57.658 --rc geninfo_unexecuted_blocks=1 00:27:57.658 00:27:57.658 ' 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:57.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.658 --rc genhtml_branch_coverage=1 00:27:57.658 --rc genhtml_function_coverage=1 00:27:57.658 --rc genhtml_legend=1 00:27:57.658 --rc geninfo_all_blocks=1 00:27:57.658 --rc geninfo_unexecuted_blocks=1 00:27:57.658 00:27:57.658 ' 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:57.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.658 --rc genhtml_branch_coverage=1 00:27:57.658 --rc genhtml_function_coverage=1 00:27:57.658 --rc genhtml_legend=1 00:27:57.658 --rc geninfo_all_blocks=1 00:27:57.658 --rc geninfo_unexecuted_blocks=1 00:27:57.658 00:27:57.658 ' 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:57.658 
22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:57.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:57.658 22:51:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:57.658 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.562 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:59.562 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:59.562 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:59.562 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:59.562 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:59.562 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:59.562 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:59.562 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:59.562 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:59.562 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:59.563 22:51:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:59.563 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:59.563 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:59.563 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 
0 )) 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:59.563 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:59.563 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:00.501 22:51:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:03.788 22:51:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:09.065 22:51:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:09.065 22:51:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:09.065 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:28:09.065 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:09.065 Found net devices under 0000:0a:00.0: cvl_0_0 
00:28:09.065 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.066 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:09.066 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.066 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:09.066 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.066 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:09.066 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:09.066 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.066 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:09.066 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:09.066 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.066 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:09.066 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:28:09.066 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:09.066 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:09.066 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:09.066 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:09.066 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:09.066 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:09.066 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:09.066 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:09.066 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:09.066 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:09.066 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:09.066 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:09.066 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:09.066 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:09.066 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:09.066 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:09.066 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:09.066 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 
-- # ip link set cvl_0_1 up 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:09.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:09.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:28:09.066 00:28:09.066 --- 10.0.0.2 ping statistics --- 00:28:09.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.066 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:09.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:09.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:28:09.066 00:28:09.066 --- 10.0.0.1 ping statistics --- 00:28:09.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.066 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=317734 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 317734 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 317734 ']' 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:09.066 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.066 [2024-10-11 22:51:12.200740] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:28:09.066 [2024-10-11 22:51:12.200809] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:09.066 [2024-10-11 22:51:12.269223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:09.066 [2024-10-11 22:51:12.315056] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:09.066 [2024-10-11 22:51:12.315119] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:09.066 [2024-10-11 22:51:12.315141] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:09.066 [2024-10-11 22:51:12.315152] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:09.066 [2024-10-11 22:51:12.315161] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:09.066 [2024-10-11 22:51:12.316788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.066 [2024-10-11 22:51:12.316815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:09.066 [2024-10-11 22:51:12.316873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:09.066 [2024-10-11 22:51:12.316876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.324 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:09.324 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:28:09.324 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:09.325 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:09.325 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.325 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:09.325 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:28:09.325 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:09.325 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:09.325 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.325 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.325 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.325 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:09.325 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:09.325 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.325 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.325 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.325 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:09.325 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.325 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.583 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.583 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:09.583 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.583 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.583 [2024-10-11 22:51:12.599091] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:09.583 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.583 
22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:09.583 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.583 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.583 Malloc1 00:28:09.583 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.583 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:09.583 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.583 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.583 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.583 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:09.583 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.583 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.583 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.583 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:09.583 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.583 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.583 [2024-10-11 22:51:12.660514] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:09.583 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.583 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=317769 00:28:09.583 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:28:09.583 22:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:11.485 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:11.485 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.485 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:11.485 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.485 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:11.485 "tick_rate": 2700000000, 00:28:11.485 "poll_groups": [ 00:28:11.485 { 00:28:11.485 "name": "nvmf_tgt_poll_group_000", 00:28:11.485 "admin_qpairs": 1, 00:28:11.485 "io_qpairs": 1, 00:28:11.485 "current_admin_qpairs": 1, 00:28:11.485 "current_io_qpairs": 1, 00:28:11.485 "pending_bdev_io": 0, 00:28:11.485 "completed_nvme_io": 19901, 00:28:11.485 "transports": [ 00:28:11.485 { 00:28:11.485 "trtype": "TCP" 00:28:11.485 } 00:28:11.485 ] 00:28:11.485 }, 00:28:11.485 { 00:28:11.485 "name": "nvmf_tgt_poll_group_001", 00:28:11.485 "admin_qpairs": 0, 00:28:11.485 "io_qpairs": 1, 00:28:11.485 "current_admin_qpairs": 0, 00:28:11.485 "current_io_qpairs": 1, 00:28:11.485 "pending_bdev_io": 0, 00:28:11.485 "completed_nvme_io": 20003, 00:28:11.485 "transports": [ 
00:28:11.485 { 00:28:11.485 "trtype": "TCP" 00:28:11.485 } 00:28:11.485 ] 00:28:11.485 }, 00:28:11.485 { 00:28:11.485 "name": "nvmf_tgt_poll_group_002", 00:28:11.485 "admin_qpairs": 0, 00:28:11.485 "io_qpairs": 1, 00:28:11.485 "current_admin_qpairs": 0, 00:28:11.485 "current_io_qpairs": 1, 00:28:11.485 "pending_bdev_io": 0, 00:28:11.485 "completed_nvme_io": 19524, 00:28:11.485 "transports": [ 00:28:11.485 { 00:28:11.485 "trtype": "TCP" 00:28:11.485 } 00:28:11.485 ] 00:28:11.485 }, 00:28:11.485 { 00:28:11.485 "name": "nvmf_tgt_poll_group_003", 00:28:11.485 "admin_qpairs": 0, 00:28:11.485 "io_qpairs": 1, 00:28:11.485 "current_admin_qpairs": 0, 00:28:11.485 "current_io_qpairs": 1, 00:28:11.485 "pending_bdev_io": 0, 00:28:11.485 "completed_nvme_io": 19533, 00:28:11.485 "transports": [ 00:28:11.485 { 00:28:11.485 "trtype": "TCP" 00:28:11.485 } 00:28:11.485 ] 00:28:11.485 } 00:28:11.485 ] 00:28:11.485 }' 00:28:11.485 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:11.485 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:11.485 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:11.485 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:11.485 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 317769 00:28:19.594 Initializing NVMe Controllers 00:28:19.594 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:19.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:19.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:19.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:19.594 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:19.594 Initialization complete. Launching workers. 00:28:19.594 ======================================================== 00:28:19.594 Latency(us) 00:28:19.594 Device Information : IOPS MiB/s Average min max 00:28:19.594 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10390.60 40.59 6158.99 2181.89 10122.94 00:28:19.594 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10626.10 41.51 6023.22 2368.73 10195.65 00:28:19.594 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10402.90 40.64 6153.70 2259.02 10594.66 00:28:19.594 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10251.00 40.04 6243.78 2538.54 10292.27 00:28:19.594 ======================================================== 00:28:19.594 Total : 41670.59 162.78 6143.91 2181.89 10594.66 00:28:19.594 00:28:19.594 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:19.594 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:19.594 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:19.594 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:19.594 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:19.594 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:19.594 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:19.594 rmmod nvme_tcp 00:28:19.594 rmmod nvme_fabrics 00:28:19.594 rmmod nvme_keyring 00:28:19.853 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:19.853 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:19.853 22:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:19.853 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 317734 ']' 00:28:19.853 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 317734 00:28:19.853 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 317734 ']' 00:28:19.853 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 317734 00:28:19.853 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:28:19.853 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:19.853 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 317734 00:28:19.853 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:19.853 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:19.853 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 317734' 00:28:19.853 killing process with pid 317734 00:28:19.853 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 317734 00:28:19.853 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 317734 00:28:20.112 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:20.112 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:20.112 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:20.112 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:20.112 22:51:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:28:20.112 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:20.112 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:28:20.112 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:20.112 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:20.113 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.113 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:20.113 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.015 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:22.015 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:22.015 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:22.015 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:22.951 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:25.653 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@474 -- # prepare_net_devs 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:30.981 22:51:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:30.981 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:30.981 22:51:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:30.981 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:28:30.981 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:30.981 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:30.981 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:30.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:30.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:28:30.982 00:28:30.982 --- 10.0.0.2 ping statistics --- 00:28:30.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.982 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:30.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:30.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:28:30.982 00:28:30.982 --- 10.0.0.1 ping statistics --- 00:28:30.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.982 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:30.982 net.core.busy_poll = 1 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:30.982 net.core.busy_read = 1 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=320518 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 
320518 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 320518 ']' 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:30.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:30.982 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.982 [2024-10-11 22:51:33.766846] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:28:30.982 [2024-10-11 22:51:33.766945] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:30.982 [2024-10-11 22:51:33.830436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:30.982 [2024-10-11 22:51:33.878587] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:30.982 [2024-10-11 22:51:33.878655] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:30.982 [2024-10-11 22:51:33.878679] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:30.982 [2024-10-11 22:51:33.878691] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:30.982 [2024-10-11 22:51:33.878700] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:30.982 [2024-10-11 22:51:33.880130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:30.982 [2024-10-11 22:51:33.880187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:30.982 [2024-10-11 22:51:33.880253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:30.982 [2024-10-11 22:51:33.880256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.982 [2024-10-11 22:51:34.158682] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.982 22:51:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.982 Malloc1 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.982 [2024-10-11 22:51:34.229655] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=320551 
00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:30.982 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:33.512 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:33.512 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.512 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:33.512 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.512 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:33.512 "tick_rate": 2700000000, 00:28:33.512 "poll_groups": [ 00:28:33.512 { 00:28:33.512 "name": "nvmf_tgt_poll_group_000", 00:28:33.512 "admin_qpairs": 1, 00:28:33.512 "io_qpairs": 2, 00:28:33.512 "current_admin_qpairs": 1, 00:28:33.512 "current_io_qpairs": 2, 00:28:33.512 "pending_bdev_io": 0, 00:28:33.512 "completed_nvme_io": 24719, 00:28:33.512 "transports": [ 00:28:33.512 { 00:28:33.512 "trtype": "TCP" 00:28:33.512 } 00:28:33.512 ] 00:28:33.512 }, 00:28:33.512 { 00:28:33.512 "name": "nvmf_tgt_poll_group_001", 00:28:33.512 "admin_qpairs": 0, 00:28:33.512 "io_qpairs": 2, 00:28:33.512 "current_admin_qpairs": 0, 00:28:33.512 "current_io_qpairs": 2, 00:28:33.512 "pending_bdev_io": 0, 00:28:33.512 "completed_nvme_io": 25055, 00:28:33.512 "transports": [ 00:28:33.512 { 00:28:33.512 "trtype": "TCP" 00:28:33.512 } 00:28:33.512 ] 00:28:33.512 }, 00:28:33.512 { 00:28:33.512 "name": "nvmf_tgt_poll_group_002", 00:28:33.512 "admin_qpairs": 0, 00:28:33.512 "io_qpairs": 0, 00:28:33.512 "current_admin_qpairs": 0, 
00:28:33.512 "current_io_qpairs": 0, 00:28:33.512 "pending_bdev_io": 0, 00:28:33.512 "completed_nvme_io": 0, 00:28:33.512 "transports": [ 00:28:33.512 { 00:28:33.512 "trtype": "TCP" 00:28:33.512 } 00:28:33.512 ] 00:28:33.512 }, 00:28:33.512 { 00:28:33.512 "name": "nvmf_tgt_poll_group_003", 00:28:33.512 "admin_qpairs": 0, 00:28:33.512 "io_qpairs": 0, 00:28:33.512 "current_admin_qpairs": 0, 00:28:33.512 "current_io_qpairs": 0, 00:28:33.512 "pending_bdev_io": 0, 00:28:33.512 "completed_nvme_io": 0, 00:28:33.512 "transports": [ 00:28:33.512 { 00:28:33.512 "trtype": "TCP" 00:28:33.512 } 00:28:33.512 ] 00:28:33.512 } 00:28:33.512 ] 00:28:33.512 }' 00:28:33.512 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:33.512 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:33.512 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:33.512 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:33.512 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 320551 00:28:41.625 Initializing NVMe Controllers 00:28:41.625 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:41.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:41.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:41.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:41.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:41.625 Initialization complete. Launching workers. 
00:28:41.625 ======================================================== 00:28:41.625 Latency(us) 00:28:41.625 Device Information : IOPS MiB/s Average min max 00:28:41.625 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6668.50 26.05 9618.18 1282.28 54252.71 00:28:41.625 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6959.50 27.19 9196.44 1071.32 54508.59 00:28:41.625 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6405.40 25.02 10017.26 1729.71 54627.71 00:28:41.625 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6449.50 25.19 9924.24 1747.94 55345.72 00:28:41.625 ======================================================== 00:28:41.625 Total : 26482.89 103.45 9678.41 1071.32 55345.72 00:28:41.625 00:28:41.625 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:41.625 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:41.625 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:41.625 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:41.625 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:41.625 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:41.625 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:41.625 rmmod nvme_tcp 00:28:41.625 rmmod nvme_fabrics 00:28:41.625 rmmod nvme_keyring 00:28:41.625 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:41.625 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:41.625 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:41.625 22:51:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 320518 ']' 00:28:41.625 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 320518 00:28:41.625 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 320518 ']' 00:28:41.625 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 320518 00:28:41.625 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:28:41.625 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:41.625 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 320518 00:28:41.625 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:41.625 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:41.625 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 320518' 00:28:41.625 killing process with pid 320518 00:28:41.625 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 320518 00:28:41.625 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 320518 00:28:41.626 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:41.626 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:41.626 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:41.626 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:41.626 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:28:41.626 22:51:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:41.626 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:28:41.626 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:41.626 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:41.626 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.626 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:41.626 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.917 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:44.917 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:44.917 00:28:44.917 real 0m47.164s 00:28:44.917 user 2m41.371s 00:28:44.917 sys 0m9.798s 00:28:44.917 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:44.917 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:44.917 ************************************ 00:28:44.917 END TEST nvmf_perf_adq 00:28:44.917 ************************************ 00:28:44.917 22:51:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:44.917 22:51:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:44.917 22:51:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:44.917 22:51:47 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:28:44.917 ************************************ 00:28:44.917 START TEST nvmf_shutdown 00:28:44.917 ************************************ 00:28:44.917 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:44.917 * Looking for test storage... 00:28:44.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:44.917 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:44.917 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:28:44.917 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:44.917 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:44.917 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:44.917 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:44.918 22:51:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:44.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.918 --rc genhtml_branch_coverage=1 00:28:44.918 --rc genhtml_function_coverage=1 00:28:44.918 --rc genhtml_legend=1 00:28:44.918 --rc geninfo_all_blocks=1 00:28:44.918 --rc geninfo_unexecuted_blocks=1 00:28:44.918 00:28:44.918 ' 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:44.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.918 --rc genhtml_branch_coverage=1 00:28:44.918 --rc genhtml_function_coverage=1 00:28:44.918 --rc genhtml_legend=1 00:28:44.918 --rc geninfo_all_blocks=1 00:28:44.918 --rc geninfo_unexecuted_blocks=1 00:28:44.918 00:28:44.918 ' 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:44.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.918 --rc genhtml_branch_coverage=1 00:28:44.918 --rc genhtml_function_coverage=1 00:28:44.918 --rc genhtml_legend=1 00:28:44.918 --rc geninfo_all_blocks=1 00:28:44.918 --rc geninfo_unexecuted_blocks=1 00:28:44.918 00:28:44.918 ' 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:44.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.918 --rc genhtml_branch_coverage=1 00:28:44.918 --rc genhtml_function_coverage=1 00:28:44.918 --rc genhtml_legend=1 00:28:44.918 --rc geninfo_all_blocks=1 00:28:44.918 --rc geninfo_unexecuted_blocks=1 00:28:44.918 00:28:44.918 ' 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:44.918 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:44.918 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:44.918 ************************************ 00:28:44.918 START TEST nvmf_shutdown_tc1 00:28:44.918 ************************************ 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:44.918 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:44.919 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.919 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:44.919 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.919 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:44.919 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:44.919 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:44.919 22:51:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:47.452 22:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:47.452 22:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:47.452 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.452 22:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:47.452 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:47.452 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- 
# echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:47.452 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:47.452 22:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:47.452 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:47.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:47.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:28:47.453 00:28:47.453 --- 10.0.0.2 ping statistics --- 00:28:47.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.453 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:47.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:47.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:28:47.453 00:28:47.453 --- 10.0.0.1 ping statistics --- 00:28:47.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.453 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=323847 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 323847 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 323847 ']' 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:47.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:47.453 [2024-10-11 22:51:50.419461] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:28:47.453 [2024-10-11 22:51:50.419559] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:47.453 [2024-10-11 22:51:50.484621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:47.453 [2024-10-11 22:51:50.533718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:47.453 [2024-10-11 22:51:50.533793] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:47.453 [2024-10-11 22:51:50.533807] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:47.453 [2024-10-11 22:51:50.533834] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:47.453 [2024-10-11 22:51:50.533844] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
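The `waitforlisten` helper traced above polls until the launched target exposes its RPC socket (here `/var/tmp/spdk.sock`), bounded by the `max_retries=100` local it declares. A minimal sketch of that polling pattern, under stated assumptions: `wait_for_path` is a hypothetical name, and a plain filesystem path stands in for the UNIX-domain RPC socket that the real helper checks (it also verifies the PID is still alive):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten-style retry loop seen in autotest_common.sh.
# wait_for_path is a hypothetical stand-in: it polls for a path to appear,
# giving up after max_retries attempts (default 100, as in the trace above).
wait_for_path() {
    local path=$1
    local max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        if [[ -e "$path" ]]; then
            return 0
        fi
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}

# Example: a background job creates the path shortly after we start waiting.
tmp=$(mktemp -u)
( sleep 0.3; touch "$tmp" ) &
wait_for_path "$tmp" 50 && echo "ready: $tmp"
wait
rm -f "$tmp"
```

The bounded loop keeps a crashed target from hanging the test run forever, which is why the trace pairs the wait with a later `'[' -z 323847 ']'` PID check.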
00:28:47.453 [2024-10-11 22:51:50.535514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:47.453 [2024-10-11 22:51:50.535586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:47.453 [2024-10-11 22:51:50.535642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:47.453 [2024-10-11 22:51:50.535646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:47.453 [2024-10-11 22:51:50.687077] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.453 22:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:47.453 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:47.711 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:47.712 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:47.712 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:47.712 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:47.712 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:47.712 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:47.712 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:47.712 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.712 22:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:47.712 Malloc1 00:28:47.712 [2024-10-11 22:51:50.787898] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:47.712 Malloc2 00:28:47.712 Malloc3 00:28:47.712 Malloc4 00:28:47.712 Malloc5 00:28:47.984 Malloc6 00:28:47.984 Malloc7 00:28:47.984 Malloc8 00:28:47.984 Malloc9 
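The bdevperf launch that follows builds its `--json /dev/fd/63` config with the `gen_nvmf_target_json` pattern from nvmf/common.sh: one heredoc fragment per subsystem is appended to a `config` array, then the fragments are comma-joined via `IFS=,` and `"${config[*]}"` and printed as a single document. A reduced sketch of that technique, with hypothetical two-field objects and two entries instead of ten (the real helper emits full `bdev_nvme_attach_controller` param blocks and pipes them through `jq .`):

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json heredoc pattern: each subsystem
# contributes one JSON fragment; the fragments are joined with commas.
config=()
for subsystem in 1 2; do
    config+=("$(cat <<EOF
{ "name": "Nvme$subsystem", "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem" }
EOF
)")
done

# "${config[*]}" joins elements with the first character of IFS, so setting
# IFS=, turns the array into a comma-separated JSON fragment list.
IFS=,
printf '[%s]\n' "${config[*]}"    # prints a two-element JSON array
unset IFS
```

Keeping each fragment in its own array element means the loop body never has to track trailing commas; the join step handles separators exactly once.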
00:28:47.984 Malloc10 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=324028 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 324028 /var/tmp/bdevperf.sock 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 324028 ']' 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:28:47.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:47.984 { 00:28:47.984 "params": { 00:28:47.984 "name": "Nvme$subsystem", 00:28:47.984 "trtype": "$TEST_TRANSPORT", 00:28:47.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.984 "adrfam": "ipv4", 00:28:47.984 "trsvcid": "$NVMF_PORT", 00:28:47.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.984 "hdgst": ${hdgst:-false}, 00:28:47.984 "ddgst": ${ddgst:-false} 00:28:47.984 }, 00:28:47.984 "method": "bdev_nvme_attach_controller" 00:28:47.984 } 00:28:47.984 EOF 00:28:47.984 )") 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:47.984 { 00:28:47.984 "params": { 00:28:47.984 "name": "Nvme$subsystem", 00:28:47.984 "trtype": "$TEST_TRANSPORT", 00:28:47.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.984 "adrfam": "ipv4", 00:28:47.984 "trsvcid": "$NVMF_PORT", 00:28:47.984 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.984 "hdgst": ${hdgst:-false}, 00:28:47.984 "ddgst": ${ddgst:-false} 00:28:47.984 }, 00:28:47.984 "method": "bdev_nvme_attach_controller" 00:28:47.984 } 00:28:47.984 EOF 00:28:47.984 )") 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:47.984 { 00:28:47.984 "params": { 00:28:47.984 "name": "Nvme$subsystem", 00:28:47.984 "trtype": "$TEST_TRANSPORT", 00:28:47.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.984 "adrfam": "ipv4", 00:28:47.984 "trsvcid": "$NVMF_PORT", 00:28:47.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.984 "hdgst": ${hdgst:-false}, 00:28:47.984 "ddgst": ${ddgst:-false} 00:28:47.984 }, 00:28:47.984 "method": "bdev_nvme_attach_controller" 00:28:47.984 } 00:28:47.984 EOF 00:28:47.984 )") 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:47.984 { 00:28:47.984 "params": { 00:28:47.984 "name": "Nvme$subsystem", 00:28:47.984 "trtype": "$TEST_TRANSPORT", 00:28:47.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.984 "adrfam": "ipv4", 00:28:47.984 "trsvcid": "$NVMF_PORT", 00:28:47.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.984 "hdgst": 
${hdgst:-false}, 00:28:47.984 "ddgst": ${ddgst:-false} 00:28:47.984 }, 00:28:47.984 "method": "bdev_nvme_attach_controller" 00:28:47.984 } 00:28:47.984 EOF 00:28:47.984 )") 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:47.984 { 00:28:47.984 "params": { 00:28:47.984 "name": "Nvme$subsystem", 00:28:47.984 "trtype": "$TEST_TRANSPORT", 00:28:47.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.984 "adrfam": "ipv4", 00:28:47.984 "trsvcid": "$NVMF_PORT", 00:28:47.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.984 "hdgst": ${hdgst:-false}, 00:28:47.984 "ddgst": ${ddgst:-false} 00:28:47.984 }, 00:28:47.984 "method": "bdev_nvme_attach_controller" 00:28:47.984 } 00:28:47.984 EOF 00:28:47.984 )") 00:28:47.984 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:48.242 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:48.243 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:48.243 { 00:28:48.243 "params": { 00:28:48.243 "name": "Nvme$subsystem", 00:28:48.243 "trtype": "$TEST_TRANSPORT", 00:28:48.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.243 "adrfam": "ipv4", 00:28:48.243 "trsvcid": "$NVMF_PORT", 00:28:48.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.243 "hdgst": ${hdgst:-false}, 00:28:48.243 "ddgst": ${ddgst:-false} 00:28:48.243 }, 00:28:48.243 "method": "bdev_nvme_attach_controller" 
00:28:48.243 } 00:28:48.243 EOF 00:28:48.243 )") 00:28:48.243 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:48.243 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:48.243 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:48.243 { 00:28:48.243 "params": { 00:28:48.243 "name": "Nvme$subsystem", 00:28:48.243 "trtype": "$TEST_TRANSPORT", 00:28:48.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.243 "adrfam": "ipv4", 00:28:48.243 "trsvcid": "$NVMF_PORT", 00:28:48.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.243 "hdgst": ${hdgst:-false}, 00:28:48.243 "ddgst": ${ddgst:-false} 00:28:48.243 }, 00:28:48.243 "method": "bdev_nvme_attach_controller" 00:28:48.243 } 00:28:48.243 EOF 00:28:48.243 )") 00:28:48.243 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:48.243 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:48.243 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:48.243 { 00:28:48.243 "params": { 00:28:48.243 "name": "Nvme$subsystem", 00:28:48.243 "trtype": "$TEST_TRANSPORT", 00:28:48.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.243 "adrfam": "ipv4", 00:28:48.243 "trsvcid": "$NVMF_PORT", 00:28:48.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.243 "hdgst": ${hdgst:-false}, 00:28:48.243 "ddgst": ${ddgst:-false} 00:28:48.243 }, 00:28:48.243 "method": "bdev_nvme_attach_controller" 00:28:48.243 } 00:28:48.243 EOF 00:28:48.243 )") 00:28:48.243 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@580 -- # cat 00:28:48.243 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:48.243 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:48.243 { 00:28:48.243 "params": { 00:28:48.243 "name": "Nvme$subsystem", 00:28:48.243 "trtype": "$TEST_TRANSPORT", 00:28:48.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.243 "adrfam": "ipv4", 00:28:48.243 "trsvcid": "$NVMF_PORT", 00:28:48.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.243 "hdgst": ${hdgst:-false}, 00:28:48.243 "ddgst": ${ddgst:-false} 00:28:48.243 }, 00:28:48.243 "method": "bdev_nvme_attach_controller" 00:28:48.243 } 00:28:48.243 EOF 00:28:48.243 )") 00:28:48.243 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:48.243 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:48.243 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:48.243 { 00:28:48.243 "params": { 00:28:48.243 "name": "Nvme$subsystem", 00:28:48.243 "trtype": "$TEST_TRANSPORT", 00:28:48.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.243 "adrfam": "ipv4", 00:28:48.243 "trsvcid": "$NVMF_PORT", 00:28:48.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.243 "hdgst": ${hdgst:-false}, 00:28:48.243 "ddgst": ${ddgst:-false} 00:28:48.243 }, 00:28:48.243 "method": "bdev_nvme_attach_controller" 00:28:48.243 } 00:28:48.243 EOF 00:28:48.243 )") 00:28:48.243 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:48.243 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # jq . 00:28:48.243 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:28:48.243 22:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:48.243 "params": { 00:28:48.243 "name": "Nvme1", 00:28:48.243 "trtype": "tcp", 00:28:48.243 "traddr": "10.0.0.2", 00:28:48.243 "adrfam": "ipv4", 00:28:48.243 "trsvcid": "4420", 00:28:48.243 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:48.243 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:48.243 "hdgst": false, 00:28:48.243 "ddgst": false 00:28:48.243 }, 00:28:48.243 "method": "bdev_nvme_attach_controller" 00:28:48.243 },{ 00:28:48.243 "params": { 00:28:48.243 "name": "Nvme2", 00:28:48.243 "trtype": "tcp", 00:28:48.243 "traddr": "10.0.0.2", 00:28:48.243 "adrfam": "ipv4", 00:28:48.243 "trsvcid": "4420", 00:28:48.243 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:48.243 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:48.243 "hdgst": false, 00:28:48.243 "ddgst": false 00:28:48.243 }, 00:28:48.243 "method": "bdev_nvme_attach_controller" 00:28:48.243 },{ 00:28:48.243 "params": { 00:28:48.243 "name": "Nvme3", 00:28:48.243 "trtype": "tcp", 00:28:48.243 "traddr": "10.0.0.2", 00:28:48.243 "adrfam": "ipv4", 00:28:48.243 "trsvcid": "4420", 00:28:48.243 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:48.243 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:48.243 "hdgst": false, 00:28:48.243 "ddgst": false 00:28:48.243 }, 00:28:48.243 "method": "bdev_nvme_attach_controller" 00:28:48.243 },{ 00:28:48.243 "params": { 00:28:48.243 "name": "Nvme4", 00:28:48.243 "trtype": "tcp", 00:28:48.243 "traddr": "10.0.0.2", 00:28:48.243 "adrfam": "ipv4", 00:28:48.243 "trsvcid": "4420", 00:28:48.243 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:48.243 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:48.243 "hdgst": false, 00:28:48.243 "ddgst": false 00:28:48.243 }, 00:28:48.243 "method": "bdev_nvme_attach_controller" 00:28:48.243 },{ 
00:28:48.243 "params": { 00:28:48.243 "name": "Nvme5", 00:28:48.243 "trtype": "tcp", 00:28:48.243 "traddr": "10.0.0.2", 00:28:48.243 "adrfam": "ipv4", 00:28:48.243 "trsvcid": "4420", 00:28:48.243 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:48.243 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:48.243 "hdgst": false, 00:28:48.243 "ddgst": false 00:28:48.243 }, 00:28:48.243 "method": "bdev_nvme_attach_controller" 00:28:48.243 },{ 00:28:48.243 "params": { 00:28:48.243 "name": "Nvme6", 00:28:48.243 "trtype": "tcp", 00:28:48.243 "traddr": "10.0.0.2", 00:28:48.243 "adrfam": "ipv4", 00:28:48.243 "trsvcid": "4420", 00:28:48.243 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:48.243 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:48.243 "hdgst": false, 00:28:48.243 "ddgst": false 00:28:48.243 }, 00:28:48.243 "method": "bdev_nvme_attach_controller" 00:28:48.243 },{ 00:28:48.243 "params": { 00:28:48.243 "name": "Nvme7", 00:28:48.243 "trtype": "tcp", 00:28:48.243 "traddr": "10.0.0.2", 00:28:48.243 "adrfam": "ipv4", 00:28:48.243 "trsvcid": "4420", 00:28:48.243 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:48.243 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:48.243 "hdgst": false, 00:28:48.243 "ddgst": false 00:28:48.243 }, 00:28:48.243 "method": "bdev_nvme_attach_controller" 00:28:48.243 },{ 00:28:48.243 "params": { 00:28:48.243 "name": "Nvme8", 00:28:48.243 "trtype": "tcp", 00:28:48.243 "traddr": "10.0.0.2", 00:28:48.243 "adrfam": "ipv4", 00:28:48.243 "trsvcid": "4420", 00:28:48.243 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:48.243 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:48.243 "hdgst": false, 00:28:48.243 "ddgst": false 00:28:48.243 }, 00:28:48.243 "method": "bdev_nvme_attach_controller" 00:28:48.243 },{ 00:28:48.243 "params": { 00:28:48.243 "name": "Nvme9", 00:28:48.243 "trtype": "tcp", 00:28:48.243 "traddr": "10.0.0.2", 00:28:48.243 "adrfam": "ipv4", 00:28:48.243 "trsvcid": "4420", 00:28:48.243 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:48.243 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:28:48.243 "hdgst": false, 00:28:48.243 "ddgst": false 00:28:48.243 }, 00:28:48.243 "method": "bdev_nvme_attach_controller" 00:28:48.243 },{ 00:28:48.243 "params": { 00:28:48.243 "name": "Nvme10", 00:28:48.243 "trtype": "tcp", 00:28:48.243 "traddr": "10.0.0.2", 00:28:48.243 "adrfam": "ipv4", 00:28:48.243 "trsvcid": "4420", 00:28:48.243 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:48.243 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:48.243 "hdgst": false, 00:28:48.243 "ddgst": false 00:28:48.243 }, 00:28:48.243 "method": "bdev_nvme_attach_controller" 00:28:48.243 }' 00:28:48.243 [2024-10-11 22:51:51.279128] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:28:48.243 [2024-10-11 22:51:51.279207] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:48.243 [2024-10-11 22:51:51.344209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.243 [2024-10-11 22:51:51.391212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.147 22:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:50.147 22:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:28:50.147 22:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:50.147 22:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.147 22:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:50.147 22:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.147 22:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 324028 00:28:50.147 22:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:50.147 22:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:51.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 324028 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 323847 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:51.081 { 00:28:51.081 "params": { 00:28:51.081 "name": "Nvme$subsystem", 00:28:51.081 "trtype": "$TEST_TRANSPORT", 00:28:51.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.081 "adrfam": "ipv4", 00:28:51.081 "trsvcid": "$NVMF_PORT", 00:28:51.081 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.081 "hdgst": ${hdgst:-false}, 00:28:51.081 "ddgst": ${ddgst:-false} 00:28:51.081 }, 00:28:51.081 "method": "bdev_nvme_attach_controller" 00:28:51.081 } 00:28:51.081 EOF 00:28:51.081 )") 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:51.081 { 00:28:51.081 "params": { 00:28:51.081 "name": "Nvme$subsystem", 00:28:51.081 "trtype": "$TEST_TRANSPORT", 00:28:51.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.081 "adrfam": "ipv4", 00:28:51.081 "trsvcid": "$NVMF_PORT", 00:28:51.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.081 "hdgst": ${hdgst:-false}, 00:28:51.081 "ddgst": ${ddgst:-false} 00:28:51.081 }, 00:28:51.081 "method": "bdev_nvme_attach_controller" 00:28:51.081 } 00:28:51.081 EOF 00:28:51.081 )") 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:51.081 { 00:28:51.081 "params": { 00:28:51.081 "name": "Nvme$subsystem", 00:28:51.081 "trtype": "$TEST_TRANSPORT", 00:28:51.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.081 "adrfam": "ipv4", 00:28:51.081 "trsvcid": "$NVMF_PORT", 00:28:51.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.081 "hdgst": 
${hdgst:-false}, 00:28:51.081 "ddgst": ${ddgst:-false} 00:28:51.081 }, 00:28:51.081 "method": "bdev_nvme_attach_controller" 00:28:51.081 } 00:28:51.081 EOF 00:28:51.081 )") 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:51.081 { 00:28:51.081 "params": { 00:28:51.081 "name": "Nvme$subsystem", 00:28:51.081 "trtype": "$TEST_TRANSPORT", 00:28:51.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.081 "adrfam": "ipv4", 00:28:51.081 "trsvcid": "$NVMF_PORT", 00:28:51.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.081 "hdgst": ${hdgst:-false}, 00:28:51.081 "ddgst": ${ddgst:-false} 00:28:51.081 }, 00:28:51.081 "method": "bdev_nvme_attach_controller" 00:28:51.081 } 00:28:51.081 EOF 00:28:51.081 )") 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:51.081 { 00:28:51.081 "params": { 00:28:51.081 "name": "Nvme$subsystem", 00:28:51.081 "trtype": "$TEST_TRANSPORT", 00:28:51.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.081 "adrfam": "ipv4", 00:28:51.081 "trsvcid": "$NVMF_PORT", 00:28:51.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.081 "hdgst": ${hdgst:-false}, 00:28:51.081 "ddgst": ${ddgst:-false} 00:28:51.081 }, 00:28:51.081 "method": "bdev_nvme_attach_controller" 
00:28:51.081 } 00:28:51.081 EOF 00:28:51.081 )") 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:51.081 { 00:28:51.081 "params": { 00:28:51.081 "name": "Nvme$subsystem", 00:28:51.081 "trtype": "$TEST_TRANSPORT", 00:28:51.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.081 "adrfam": "ipv4", 00:28:51.081 "trsvcid": "$NVMF_PORT", 00:28:51.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.081 "hdgst": ${hdgst:-false}, 00:28:51.081 "ddgst": ${ddgst:-false} 00:28:51.081 }, 00:28:51.081 "method": "bdev_nvme_attach_controller" 00:28:51.081 } 00:28:51.081 EOF 00:28:51.081 )") 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:51.081 { 00:28:51.081 "params": { 00:28:51.081 "name": "Nvme$subsystem", 00:28:51.081 "trtype": "$TEST_TRANSPORT", 00:28:51.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.081 "adrfam": "ipv4", 00:28:51.081 "trsvcid": "$NVMF_PORT", 00:28:51.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.081 "hdgst": ${hdgst:-false}, 00:28:51.081 "ddgst": ${ddgst:-false} 00:28:51.081 }, 00:28:51.081 "method": "bdev_nvme_attach_controller" 00:28:51.081 } 00:28:51.081 EOF 00:28:51.081 )") 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@580 -- # cat 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:51.081 { 00:28:51.081 "params": { 00:28:51.081 "name": "Nvme$subsystem", 00:28:51.081 "trtype": "$TEST_TRANSPORT", 00:28:51.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.081 "adrfam": "ipv4", 00:28:51.081 "trsvcid": "$NVMF_PORT", 00:28:51.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.081 "hdgst": ${hdgst:-false}, 00:28:51.081 "ddgst": ${ddgst:-false} 00:28:51.081 }, 00:28:51.081 "method": "bdev_nvme_attach_controller" 00:28:51.081 } 00:28:51.081 EOF 00:28:51.081 )") 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:51.081 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:51.081 { 00:28:51.081 "params": { 00:28:51.081 "name": "Nvme$subsystem", 00:28:51.081 "trtype": "$TEST_TRANSPORT", 00:28:51.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.081 "adrfam": "ipv4", 00:28:51.081 "trsvcid": "$NVMF_PORT", 00:28:51.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.081 "hdgst": ${hdgst:-false}, 00:28:51.081 "ddgst": ${ddgst:-false} 00:28:51.081 }, 00:28:51.081 "method": "bdev_nvme_attach_controller" 00:28:51.081 } 00:28:51.082 EOF 00:28:51.082 )") 00:28:51.082 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:51.082 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:51.082 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:51.082 { 00:28:51.082 "params": { 00:28:51.082 "name": "Nvme$subsystem", 00:28:51.082 "trtype": "$TEST_TRANSPORT", 00:28:51.082 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.082 "adrfam": "ipv4", 00:28:51.082 "trsvcid": "$NVMF_PORT", 00:28:51.082 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.082 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.082 "hdgst": ${hdgst:-false}, 00:28:51.082 "ddgst": ${ddgst:-false} 00:28:51.082 }, 00:28:51.082 "method": "bdev_nvme_attach_controller" 00:28:51.082 } 00:28:51.082 EOF 00:28:51.082 )") 00:28:51.082 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:51.082 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:28:51.082 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:28:51.082 22:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:51.082 "params": { 00:28:51.082 "name": "Nvme1", 00:28:51.082 "trtype": "tcp", 00:28:51.082 "traddr": "10.0.0.2", 00:28:51.082 "adrfam": "ipv4", 00:28:51.082 "trsvcid": "4420", 00:28:51.082 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:51.082 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:51.082 "hdgst": false, 00:28:51.082 "ddgst": false 00:28:51.082 }, 00:28:51.082 "method": "bdev_nvme_attach_controller" 00:28:51.082 },{ 00:28:51.082 "params": { 00:28:51.082 "name": "Nvme2", 00:28:51.082 "trtype": "tcp", 00:28:51.082 "traddr": "10.0.0.2", 00:28:51.082 "adrfam": "ipv4", 00:28:51.082 "trsvcid": "4420", 00:28:51.082 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:51.082 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:51.082 "hdgst": false, 00:28:51.082 "ddgst": false 00:28:51.082 }, 
00:28:51.082 "method": "bdev_nvme_attach_controller" 00:28:51.082 },{ 00:28:51.082 "params": { 00:28:51.082 "name": "Nvme3", 00:28:51.082 "trtype": "tcp", 00:28:51.082 "traddr": "10.0.0.2", 00:28:51.082 "adrfam": "ipv4", 00:28:51.082 "trsvcid": "4420", 00:28:51.082 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:51.082 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:51.082 "hdgst": false, 00:28:51.082 "ddgst": false 00:28:51.082 }, 00:28:51.082 "method": "bdev_nvme_attach_controller" 00:28:51.082 },{ 00:28:51.082 "params": { 00:28:51.082 "name": "Nvme4", 00:28:51.082 "trtype": "tcp", 00:28:51.082 "traddr": "10.0.0.2", 00:28:51.082 "adrfam": "ipv4", 00:28:51.082 "trsvcid": "4420", 00:28:51.082 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:51.082 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:51.082 "hdgst": false, 00:28:51.082 "ddgst": false 00:28:51.082 }, 00:28:51.082 "method": "bdev_nvme_attach_controller" 00:28:51.082 },{ 00:28:51.082 "params": { 00:28:51.082 "name": "Nvme5", 00:28:51.082 "trtype": "tcp", 00:28:51.082 "traddr": "10.0.0.2", 00:28:51.082 "adrfam": "ipv4", 00:28:51.082 "trsvcid": "4420", 00:28:51.082 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:51.082 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:51.082 "hdgst": false, 00:28:51.082 "ddgst": false 00:28:51.082 }, 00:28:51.082 "method": "bdev_nvme_attach_controller" 00:28:51.082 },{ 00:28:51.082 "params": { 00:28:51.082 "name": "Nvme6", 00:28:51.082 "trtype": "tcp", 00:28:51.082 "traddr": "10.0.0.2", 00:28:51.082 "adrfam": "ipv4", 00:28:51.082 "trsvcid": "4420", 00:28:51.082 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:51.082 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:51.082 "hdgst": false, 00:28:51.082 "ddgst": false 00:28:51.082 }, 00:28:51.082 "method": "bdev_nvme_attach_controller" 00:28:51.082 },{ 00:28:51.082 "params": { 00:28:51.082 "name": "Nvme7", 00:28:51.082 "trtype": "tcp", 00:28:51.082 "traddr": "10.0.0.2", 00:28:51.082 "adrfam": "ipv4", 00:28:51.082 "trsvcid": "4420", 00:28:51.082 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:51.082 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:51.082 "hdgst": false, 00:28:51.082 "ddgst": false 00:28:51.082 }, 00:28:51.082 "method": "bdev_nvme_attach_controller" 00:28:51.082 },{ 00:28:51.082 "params": { 00:28:51.082 "name": "Nvme8", 00:28:51.082 "trtype": "tcp", 00:28:51.082 "traddr": "10.0.0.2", 00:28:51.082 "adrfam": "ipv4", 00:28:51.082 "trsvcid": "4420", 00:28:51.082 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:51.082 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:51.082 "hdgst": false, 00:28:51.082 "ddgst": false 00:28:51.082 }, 00:28:51.082 "method": "bdev_nvme_attach_controller" 00:28:51.082 },{ 00:28:51.082 "params": { 00:28:51.082 "name": "Nvme9", 00:28:51.082 "trtype": "tcp", 00:28:51.082 "traddr": "10.0.0.2", 00:28:51.082 "adrfam": "ipv4", 00:28:51.082 "trsvcid": "4420", 00:28:51.082 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:51.082 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:51.082 "hdgst": false, 00:28:51.082 "ddgst": false 00:28:51.082 }, 00:28:51.082 "method": "bdev_nvme_attach_controller" 00:28:51.082 },{ 00:28:51.082 "params": { 00:28:51.082 "name": "Nvme10", 00:28:51.082 "trtype": "tcp", 00:28:51.082 "traddr": "10.0.0.2", 00:28:51.082 "adrfam": "ipv4", 00:28:51.082 "trsvcid": "4420", 00:28:51.082 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:51.082 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:51.082 "hdgst": false, 00:28:51.082 "ddgst": false 00:28:51.082 }, 00:28:51.082 "method": "bdev_nvme_attach_controller" 00:28:51.082 }' 00:28:51.082 [2024-10-11 22:51:54.342607] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:28:51.082 [2024-10-11 22:51:54.342689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid324446 ] 00:28:51.340 [2024-10-11 22:51:54.406912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.340 [2024-10-11 22:51:54.456078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.713 Running I/O for 1 seconds... 00:28:53.904 1800.00 IOPS, 112.50 MiB/s 00:28:53.904 Latency(us) 00:28:53.904 [2024-10-11T20:51:57.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:53.904 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:53.904 Verification LBA range: start 0x0 length 0x400 00:28:53.904 Nvme1n1 : 1.08 240.53 15.03 0.00 0.00 257640.13 20097.71 243891.01 00:28:53.904 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:53.904 Verification LBA range: start 0x0 length 0x400 00:28:53.904 Nvme2n1 : 1.15 221.69 13.86 0.00 0.00 281130.29 20291.89 254765.13 00:28:53.904 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:53.904 Verification LBA range: start 0x0 length 0x400 00:28:53.904 Nvme3n1 : 1.09 235.64 14.73 0.00 0.00 259515.73 29127.11 240784.12 00:28:53.904 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:53.904 Verification LBA range: start 0x0 length 0x400 00:28:53.904 Nvme4n1 : 1.09 234.82 14.68 0.00 0.00 255995.26 15631.55 253211.69 00:28:53.904 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:53.904 Verification LBA range: start 0x0 length 0x400 00:28:53.904 Nvme5n1 : 1.12 228.47 14.28 0.00 0.00 259141.03 26991.12 245444.46 00:28:53.904 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:53.904 Verification LBA range: start 0x0 
length 0x400 00:28:53.904 Nvme6n1 : 1.17 219.45 13.72 0.00 0.00 266196.95 21262.79 267192.70 00:28:53.904 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:53.904 Verification LBA range: start 0x0 length 0x400 00:28:53.904 Nvme7n1 : 1.13 226.08 14.13 0.00 0.00 253201.64 19903.53 254765.13 00:28:53.904 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:53.904 Verification LBA range: start 0x0 length 0x400 00:28:53.904 Nvme8n1 : 1.18 275.14 17.20 0.00 0.00 205066.40 3422.44 253211.69 00:28:53.904 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:53.904 Verification LBA range: start 0x0 length 0x400 00:28:53.904 Nvme9n1 : 1.18 270.61 16.91 0.00 0.00 205103.71 6747.78 251658.24 00:28:53.904 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:53.904 Verification LBA range: start 0x0 length 0x400 00:28:53.904 Nvme10n1 : 1.18 217.75 13.61 0.00 0.00 250377.10 22233.69 281173.71 00:28:53.904 [2024-10-11T20:51:57.172Z] =================================================================================================================== 00:28:53.904 [2024-10-11T20:51:57.172Z] Total : 2370.17 148.14 0.00 0.00 247182.49 3422.44 281173.71 00:28:54.163 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:54.163 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:54.163 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:54.163 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:54.163 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- target/shutdown.sh@46 -- # nvmftestfini 00:28:54.163 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:54.163 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:54.163 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:54.163 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:54.163 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:54.163 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:54.163 rmmod nvme_tcp 00:28:54.163 rmmod nvme_fabrics 00:28:54.163 rmmod nvme_keyring 00:28:54.163 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:54.163 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:54.163 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:54.163 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 323847 ']' 00:28:54.163 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 323847 00:28:54.163 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 323847 ']' 00:28:54.163 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 323847 00:28:54.163 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:28:54.163 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 
-- # '[' Linux = Linux ']' 00:28:54.163 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 323847 00:28:54.163 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:54.163 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:54.163 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 323847' 00:28:54.163 killing process with pid 323847 00:28:54.163 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 323847 00:28:54.163 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 323847 00:28:54.731 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:54.731 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:54.731 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:54.731 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:28:54.731 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:28:54.731 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:28:54.731 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:54.731 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:54.731 22:51:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:54.731 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.731 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:54.731 22:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.634 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:56.634 00:28:56.634 real 0m11.750s 00:28:56.634 user 0m33.599s 00:28:56.634 sys 0m3.242s 00:28:56.634 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:56.634 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:56.634 ************************************ 00:28:56.634 END TEST nvmf_shutdown_tc1 00:28:56.634 ************************************ 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:56.635 ************************************ 00:28:56.635 START TEST nvmf_shutdown_tc2 00:28:56.635 ************************************ 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:28:56.635 22:51:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:56.635 22:51:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:56.635 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:56.635 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:56.635 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.635 22:51:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:56.635 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:56.635 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:56.636 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:56.636 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:56.636 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:56.636 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:56.636 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:28:56.894 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:56.894 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:56.894 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:56.894 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:56.894 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:56.894 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:56.894 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:56.894 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:56.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:56.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:28:56.894 00:28:56.894 --- 10.0.0.2 ping statistics --- 00:28:56.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.894 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:28:56.894 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:56.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:56.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:28:56.894 00:28:56.894 --- 10.0.0.1 ping statistics --- 00:28:56.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.894 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:28:56.894 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:56.894 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:28:56.894 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:56.894 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:56.894 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:56.894 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:56.894 22:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:56.894 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:56.894 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:56.894 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:56.894 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:56.894 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:56.894 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:56.894 
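The trace above records `nvmf_tcp_init` moving the target-side interface into a dedicated network namespace and verifying connectivity in both directions with `ping`. The sequence can be sketched as below; the interface names `cvl_0_0`/`cvl_0_1`, namespace `cvl_0_0_ns_spdk`, and addresses `10.0.0.1`/`10.0.0.2` are taken from the trace, while the `run` dry-run wrapper is an assumption added here so the sketch can execute without root privileges:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target-namespace setup seen in the trace above.
# With DRY_RUN=1 (the default) commands are printed, not executed, since
# the real sequence needs root; set DRY_RUN=0 to actually run it.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

NS=cvl_0_0_ns_spdk   # target namespace name from the trace
TGT_IF=cvl_0_0       # target-side interface (moved into $NS)
INI_IF=cvl_0_1       # initiator-side interface (stays in the host)

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run ping -c 1 10.0.0.2                        # host -> target namespace
run ip netns exec "$NS" ping -c 1 10.0.0.1    # target namespace -> host
```

Moving only the target NIC into a namespace lets the initiator and target share one physical host while still exercising a real TCP path between two interfaces, which is why the trace then prepends `ip netns exec cvl_0_0_ns_spdk` to the `nvmf_tgt` invocation.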
22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=325208 00:28:56.894 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:56.894 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 325208 00:28:56.894 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 325208 ']' 00:28:56.894 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.894 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:56.894 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:56.894 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:56.894 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:56.894 [2024-10-11 22:52:00.081647] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:28:56.894 [2024-10-11 22:52:00.081765] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:56.894 [2024-10-11 22:52:00.147701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:57.152 [2024-10-11 22:52:00.195768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:57.152 [2024-10-11 22:52:00.195838] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:57.152 [2024-10-11 22:52:00.195861] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:57.152 [2024-10-11 22:52:00.195872] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:57.152 [2024-10-11 22:52:00.195881] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:57.152 [2024-10-11 22:52:00.197374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:57.152 [2024-10-11 22:52:00.197484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:57.152 [2024-10-11 22:52:00.197588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:57.152 [2024-10-11 22:52:00.197592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:57.152 [2024-10-11 22:52:00.335070] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.152 22:52:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.152 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:57.152 Malloc1 00:28:57.410 [2024-10-11 22:52:00.422206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:57.410 Malloc2 00:28:57.410 Malloc3 00:28:57.410 Malloc4 00:28:57.410 Malloc5 00:28:57.410 Malloc6 00:28:57.668 Malloc7 00:28:57.668 Malloc8 00:28:57.668 Malloc9 
00:28:57.668 Malloc10 00:28:57.668 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.668 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:57.668 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:57.668 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:57.668 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=325337 00:28:57.668 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 325337 /var/tmp/bdevperf.sock 00:28:57.668 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 325337 ']' 00:28:57.668 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:57.668 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:57.668 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:57.668 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:57.668 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:28:57.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:57.669 { 00:28:57.669 "params": { 00:28:57.669 "name": "Nvme$subsystem", 00:28:57.669 "trtype": "$TEST_TRANSPORT", 00:28:57.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.669 "adrfam": "ipv4", 00:28:57.669 "trsvcid": "$NVMF_PORT", 00:28:57.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.669 "hdgst": ${hdgst:-false}, 00:28:57.669 "ddgst": ${ddgst:-false} 00:28:57.669 }, 00:28:57.669 "method": "bdev_nvme_attach_controller" 00:28:57.669 } 00:28:57.669 EOF 00:28:57.669 )") 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:57.669 { 00:28:57.669 "params": { 00:28:57.669 "name": "Nvme$subsystem", 00:28:57.669 "trtype": "$TEST_TRANSPORT", 00:28:57.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.669 "adrfam": "ipv4", 00:28:57.669 "trsvcid": "$NVMF_PORT", 00:28:57.669 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.669 "hdgst": ${hdgst:-false}, 00:28:57.669 "ddgst": ${ddgst:-false} 00:28:57.669 }, 00:28:57.669 "method": "bdev_nvme_attach_controller" 00:28:57.669 } 00:28:57.669 EOF 00:28:57.669 )") 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:57.669 { 00:28:57.669 "params": { 00:28:57.669 "name": "Nvme$subsystem", 00:28:57.669 "trtype": "$TEST_TRANSPORT", 00:28:57.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.669 "adrfam": "ipv4", 00:28:57.669 "trsvcid": "$NVMF_PORT", 00:28:57.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.669 "hdgst": ${hdgst:-false}, 00:28:57.669 "ddgst": ${ddgst:-false} 00:28:57.669 }, 00:28:57.669 "method": "bdev_nvme_attach_controller" 00:28:57.669 } 00:28:57.669 EOF 00:28:57.669 )") 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:57.669 { 00:28:57.669 "params": { 00:28:57.669 "name": "Nvme$subsystem", 00:28:57.669 "trtype": "$TEST_TRANSPORT", 00:28:57.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.669 "adrfam": "ipv4", 00:28:57.669 "trsvcid": "$NVMF_PORT", 00:28:57.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.669 "hdgst": 
${hdgst:-false}, 00:28:57.669 "ddgst": ${ddgst:-false} 00:28:57.669 }, 00:28:57.669 "method": "bdev_nvme_attach_controller" 00:28:57.669 } 00:28:57.669 EOF 00:28:57.669 )") 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:57.669 { 00:28:57.669 "params": { 00:28:57.669 "name": "Nvme$subsystem", 00:28:57.669 "trtype": "$TEST_TRANSPORT", 00:28:57.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.669 "adrfam": "ipv4", 00:28:57.669 "trsvcid": "$NVMF_PORT", 00:28:57.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.669 "hdgst": ${hdgst:-false}, 00:28:57.669 "ddgst": ${ddgst:-false} 00:28:57.669 }, 00:28:57.669 "method": "bdev_nvme_attach_controller" 00:28:57.669 } 00:28:57.669 EOF 00:28:57.669 )") 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:57.669 { 00:28:57.669 "params": { 00:28:57.669 "name": "Nvme$subsystem", 00:28:57.669 "trtype": "$TEST_TRANSPORT", 00:28:57.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.669 "adrfam": "ipv4", 00:28:57.669 "trsvcid": "$NVMF_PORT", 00:28:57.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.669 "hdgst": ${hdgst:-false}, 00:28:57.669 "ddgst": ${ddgst:-false} 00:28:57.669 }, 00:28:57.669 "method": "bdev_nvme_attach_controller" 
00:28:57.669 } 00:28:57.669 EOF 00:28:57.669 )") 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:57.669 { 00:28:57.669 "params": { 00:28:57.669 "name": "Nvme$subsystem", 00:28:57.669 "trtype": "$TEST_TRANSPORT", 00:28:57.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.669 "adrfam": "ipv4", 00:28:57.669 "trsvcid": "$NVMF_PORT", 00:28:57.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.669 "hdgst": ${hdgst:-false}, 00:28:57.669 "ddgst": ${ddgst:-false} 00:28:57.669 }, 00:28:57.669 "method": "bdev_nvme_attach_controller" 00:28:57.669 } 00:28:57.669 EOF 00:28:57.669 )") 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:57.669 { 00:28:57.669 "params": { 00:28:57.669 "name": "Nvme$subsystem", 00:28:57.669 "trtype": "$TEST_TRANSPORT", 00:28:57.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.669 "adrfam": "ipv4", 00:28:57.669 "trsvcid": "$NVMF_PORT", 00:28:57.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.669 "hdgst": ${hdgst:-false}, 00:28:57.669 "ddgst": ${ddgst:-false} 00:28:57.669 }, 00:28:57.669 "method": "bdev_nvme_attach_controller" 00:28:57.669 } 00:28:57.669 EOF 00:28:57.669 )") 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@580 -- # cat 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:57.669 { 00:28:57.669 "params": { 00:28:57.669 "name": "Nvme$subsystem", 00:28:57.669 "trtype": "$TEST_TRANSPORT", 00:28:57.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.669 "adrfam": "ipv4", 00:28:57.669 "trsvcid": "$NVMF_PORT", 00:28:57.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.669 "hdgst": ${hdgst:-false}, 00:28:57.669 "ddgst": ${ddgst:-false} 00:28:57.669 }, 00:28:57.669 "method": "bdev_nvme_attach_controller" 00:28:57.669 } 00:28:57.669 EOF 00:28:57.669 )") 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:57.669 { 00:28:57.669 "params": { 00:28:57.669 "name": "Nvme$subsystem", 00:28:57.669 "trtype": "$TEST_TRANSPORT", 00:28:57.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.669 "adrfam": "ipv4", 00:28:57.669 "trsvcid": "$NVMF_PORT", 00:28:57.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.669 "hdgst": ${hdgst:-false}, 00:28:57.669 "ddgst": ${ddgst:-false} 00:28:57.669 }, 00:28:57.669 "method": "bdev_nvme_attach_controller" 00:28:57.669 } 00:28:57.669 EOF 00:28:57.669 )") 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@582 -- # jq . 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:28:57.669 22:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:57.669 "params": { 00:28:57.670 "name": "Nvme1", 00:28:57.670 "trtype": "tcp", 00:28:57.670 "traddr": "10.0.0.2", 00:28:57.670 "adrfam": "ipv4", 00:28:57.670 "trsvcid": "4420", 00:28:57.670 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:57.670 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:57.670 "hdgst": false, 00:28:57.670 "ddgst": false 00:28:57.670 }, 00:28:57.670 "method": "bdev_nvme_attach_controller" 00:28:57.670 },{ 00:28:57.670 "params": { 00:28:57.670 "name": "Nvme2", 00:28:57.670 "trtype": "tcp", 00:28:57.670 "traddr": "10.0.0.2", 00:28:57.670 "adrfam": "ipv4", 00:28:57.670 "trsvcid": "4420", 00:28:57.670 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:57.670 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:57.670 "hdgst": false, 00:28:57.670 "ddgst": false 00:28:57.670 }, 00:28:57.670 "method": "bdev_nvme_attach_controller" 00:28:57.670 },{ 00:28:57.670 "params": { 00:28:57.670 "name": "Nvme3", 00:28:57.670 "trtype": "tcp", 00:28:57.670 "traddr": "10.0.0.2", 00:28:57.670 "adrfam": "ipv4", 00:28:57.670 "trsvcid": "4420", 00:28:57.670 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:57.670 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:57.670 "hdgst": false, 00:28:57.670 "ddgst": false 00:28:57.670 }, 00:28:57.670 "method": "bdev_nvme_attach_controller" 00:28:57.670 },{ 00:28:57.670 "params": { 00:28:57.670 "name": "Nvme4", 00:28:57.670 "trtype": "tcp", 00:28:57.670 "traddr": "10.0.0.2", 00:28:57.670 "adrfam": "ipv4", 00:28:57.670 "trsvcid": "4420", 00:28:57.670 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:57.670 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:57.670 "hdgst": false, 00:28:57.670 "ddgst": false 00:28:57.670 }, 00:28:57.670 "method": "bdev_nvme_attach_controller" 00:28:57.670 },{ 
00:28:57.670 "params": { 00:28:57.670 "name": "Nvme5", 00:28:57.670 "trtype": "tcp", 00:28:57.670 "traddr": "10.0.0.2", 00:28:57.670 "adrfam": "ipv4", 00:28:57.670 "trsvcid": "4420", 00:28:57.670 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:57.670 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:57.670 "hdgst": false, 00:28:57.670 "ddgst": false 00:28:57.670 }, 00:28:57.670 "method": "bdev_nvme_attach_controller" 00:28:57.670 },{ 00:28:57.670 "params": { 00:28:57.670 "name": "Nvme6", 00:28:57.670 "trtype": "tcp", 00:28:57.670 "traddr": "10.0.0.2", 00:28:57.670 "adrfam": "ipv4", 00:28:57.670 "trsvcid": "4420", 00:28:57.670 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:57.670 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:57.670 "hdgst": false, 00:28:57.670 "ddgst": false 00:28:57.670 }, 00:28:57.670 "method": "bdev_nvme_attach_controller" 00:28:57.670 },{ 00:28:57.670 "params": { 00:28:57.670 "name": "Nvme7", 00:28:57.670 "trtype": "tcp", 00:28:57.670 "traddr": "10.0.0.2", 00:28:57.670 "adrfam": "ipv4", 00:28:57.670 "trsvcid": "4420", 00:28:57.670 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:57.670 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:57.670 "hdgst": false, 00:28:57.670 "ddgst": false 00:28:57.670 }, 00:28:57.670 "method": "bdev_nvme_attach_controller" 00:28:57.670 },{ 00:28:57.670 "params": { 00:28:57.670 "name": "Nvme8", 00:28:57.670 "trtype": "tcp", 00:28:57.670 "traddr": "10.0.0.2", 00:28:57.670 "adrfam": "ipv4", 00:28:57.670 "trsvcid": "4420", 00:28:57.670 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:57.670 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:57.670 "hdgst": false, 00:28:57.670 "ddgst": false 00:28:57.670 }, 00:28:57.670 "method": "bdev_nvme_attach_controller" 00:28:57.670 },{ 00:28:57.670 "params": { 00:28:57.670 "name": "Nvme9", 00:28:57.670 "trtype": "tcp", 00:28:57.670 "traddr": "10.0.0.2", 00:28:57.670 "adrfam": "ipv4", 00:28:57.670 "trsvcid": "4420", 00:28:57.670 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:57.670 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:28:57.670 "hdgst": false, 00:28:57.670 "ddgst": false 00:28:57.670 }, 00:28:57.670 "method": "bdev_nvme_attach_controller" 00:28:57.670 },{ 00:28:57.670 "params": { 00:28:57.670 "name": "Nvme10", 00:28:57.670 "trtype": "tcp", 00:28:57.670 "traddr": "10.0.0.2", 00:28:57.670 "adrfam": "ipv4", 00:28:57.670 "trsvcid": "4420", 00:28:57.670 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:57.670 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:57.670 "hdgst": false, 00:28:57.670 "ddgst": false 00:28:57.670 }, 00:28:57.670 "method": "bdev_nvme_attach_controller" 00:28:57.670 }' 00:28:57.670 [2024-10-11 22:52:00.915725] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:28:57.670 [2024-10-11 22:52:00.915804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid325337 ] 00:28:57.928 [2024-10-11 22:52:00.979223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.928 [2024-10-11 22:52:01.027842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.298 Running I/O for 10 seconds... 
00:28:59.864 22:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:59.864 22:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:28:59.864 22:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:59.864 22:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.864 22:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:59.864 22:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.864 22:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:59.864 22:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:59.864 22:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:59.864 22:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:59.864 22:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:59.864 22:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:59.864 22:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:59.864 22:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:59.864 22:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:59.864 22:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.864 22:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:59.864 22:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.864 22:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:59.864 22:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:59.864 22:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:00.121 22:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:00.121 22:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:00.121 22:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:00.121 22:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:00.121 22:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.121 22:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:00.121 22:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.121 22:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:00.121 22:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:00.121 22:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:29:00.121 22:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:29:00.121 22:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:29:00.121 22:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 325337 00:29:00.121 22:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 325337 ']' 00:29:00.121 22:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 325337 00:29:00.121 22:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:29:00.121 22:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:00.121 22:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 325337 00:29:00.121 22:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:00.121 22:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:00.121 22:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 325337' 00:29:00.121 killing process with pid 325337 00:29:00.121 22:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 325337 00:29:00.121 22:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 325337 00:29:00.379 Received 
shutdown signal, test time was about 0.918888 seconds 00:29:00.379 00:29:00.379 Latency(us) 00:29:00.379 [2024-10-11T20:52:03.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.379 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:00.379 Verification LBA range: start 0x0 length 0x400 00:29:00.379 Nvme1n1 : 0.92 278.92 17.43 0.00 0.00 226538.76 17864.63 257872.02 00:29:00.379 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:00.379 Verification LBA range: start 0x0 length 0x400 00:29:00.379 Nvme2n1 : 0.90 213.66 13.35 0.00 0.00 289858.75 25437.68 262532.36 00:29:00.379 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:00.379 Verification LBA range: start 0x0 length 0x400 00:29:00.379 Nvme3n1 : 0.87 221.12 13.82 0.00 0.00 273287.14 18544.26 257872.02 00:29:00.379 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:00.379 Verification LBA range: start 0x0 length 0x400 00:29:00.379 Nvme4n1 : 0.91 279.79 17.49 0.00 0.00 212158.39 20874.43 259425.47 00:29:00.379 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:00.379 Verification LBA range: start 0x0 length 0x400 00:29:00.379 Nvme5n1 : 0.88 217.85 13.62 0.00 0.00 265595.76 37476.88 242337.56 00:29:00.379 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:00.379 Verification LBA range: start 0x0 length 0x400 00:29:00.379 Nvme6n1 : 0.88 222.04 13.88 0.00 0.00 252883.94 6140.97 257872.02 00:29:00.379 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:00.379 Verification LBA range: start 0x0 length 0x400 00:29:00.379 Nvme7n1 : 0.89 221.42 13.84 0.00 0.00 248818.04 3859.34 240784.12 00:29:00.379 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:00.379 Verification LBA range: start 0x0 length 0x400 00:29:00.379 Nvme8n1 : 0.90 218.75 13.67 0.00 0.00 245952.64 
5582.70 260978.92 00:29:00.379 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:00.379 Verification LBA range: start 0x0 length 0x400 00:29:00.379 Nvme9n1 : 0.91 211.81 13.24 0.00 0.00 250222.36 19320.98 270299.59 00:29:00.379 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:00.379 Verification LBA range: start 0x0 length 0x400 00:29:00.379 Nvme10n1 : 0.91 210.49 13.16 0.00 0.00 246235.34 22039.51 292047.83 00:29:00.379 [2024-10-11T20:52:03.647Z] =================================================================================================================== 00:29:00.379 [2024-10-11T20:52:03.647Z] Total : 2295.85 143.49 0.00 0.00 249165.77 3859.34 292047.83 00:29:00.379 22:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:29:01.749 22:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 325208 00:29:01.749 22:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:29:01.749 22:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:01.749 22:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:01.749 22:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:01.749 22:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:01.749 22:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:01.749 22:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 
00:29:01.749 22:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:01.749 22:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:29:01.749 22:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:01.749 22:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:01.749 rmmod nvme_tcp 00:29:01.749 rmmod nvme_fabrics 00:29:01.749 rmmod nvme_keyring 00:29:01.749 22:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:01.749 22:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:29:01.749 22:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:29:01.749 22:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 325208 ']' 00:29:01.749 22:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 325208 00:29:01.749 22:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 325208 ']' 00:29:01.749 22:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 325208 00:29:01.749 22:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:29:01.749 22:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:01.749 22:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 325208 00:29:01.749 22:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:01.749 22:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:01.750 22:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 325208' 00:29:01.750 killing process with pid 325208 00:29:01.750 22:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 325208 00:29:01.750 22:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 325208 00:29:02.008 22:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:02.008 22:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:02.008 22:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:02.008 22:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:29:02.008 22:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:29:02.008 22:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:02.008 22:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:29:02.008 22:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:02.008 22:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:02.008 22:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.008 22:52:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:02.008 22:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.547 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:04.547 00:29:04.547 real 0m7.413s 00:29:04.547 user 0m22.328s 00:29:04.547 sys 0m1.452s 00:29:04.547 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:04.547 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.547 ************************************ 00:29:04.547 END TEST nvmf_shutdown_tc2 00:29:04.547 ************************************ 00:29:04.547 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:04.547 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:04.547 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:04.547 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:04.547 ************************************ 00:29:04.547 START TEST nvmf_shutdown_tc3 00:29:04.547 ************************************ 00:29:04.547 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:29:04.547 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:04.547 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:04.548 
22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:04.548 22:52:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:04.548 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:04.548 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:04.548 22:52:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:04.548 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.548 22:52:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:04.548 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # 
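The trace above (common.sh@408-427) discovers net interfaces by globbing each device's `net/` directory under sysfs and then stripping the path prefix with a `##*/` parameter expansion. A minimal standalone sketch of that expansion; the sysfs paths below are fabricated stand-ins for the glob results, mirroring the interface names in the log:

```shell
#!/usr/bin/env bash
# Simulated glob results as they would come back from
# "/sys/bus/pci/devices/$pci/net/"* -- paths are illustrative only.
pci_net_devs=(
  "/sys/bus/pci/devices/0000:0a:00.0/net/cvl_0_0"
  "/sys/bus/pci/devices/0000:0a:00.1/net/cvl_0_1"
)

# Same expansion as common.sh@425: "##*/" deletes the longest prefix
# ending in '/', leaving just the interface names.
pci_net_devs=("${pci_net_devs[@]##*/}")

printf 'Found net device: %s\n' "${pci_net_devs[@]}"
```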
NVMF_INITIATOR_IP=10.0.0.1 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:04.548 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:04.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:04.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:29:04.549 00:29:04.549 --- 10.0.0.2 ping statistics --- 00:29:04.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.549 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:04.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:04.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:29:04.549 00:29:04.549 --- 10.0.0.1 ping statistics --- 00:29:04.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.549 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:04.549 
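The `nvmf_tcp_init` steps traced above (common.sh@250-291) move the target interface into a private network namespace, address both ends, open TCP port 4420, and verify connectivity with pings in each direction. A dry-run sketch of that plumbing follows; the `run` wrapper is added here for illustration (the real script executes these commands directly and needs root), while the interface names, IPs, and port mirror the log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup performed by nvmf_tcp_init.
# Swap the echo for "$@" to actually execute (requires root).
run() { echo "+ $*"; }

nvmf_tcp_init_sketch() {
  local target_if=$1 initiator_if=$2
  local ns=${target_if}_ns_spdk

  # Flush stale addresses, create the namespace, move the target NIC in.
  run ip -4 addr flush "$target_if"
  run ip -4 addr flush "$initiator_if"
  run ip netns add "$ns"
  run ip link set "$target_if" netns "$ns"

  # Initiator stays in the root namespace; target lives inside $ns.
  run ip addr add 10.0.0.1/24 dev "$initiator_if"
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
  run ip link set "$initiator_if" up
  run ip netns exec "$ns" ip link set "$target_if" up
  run ip netns exec "$ns" ip link set lo up

  # Accept NVMe/TCP traffic on the discovery/IO port.
  run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}

nvmf_tcp_init_sketch cvl_0_0 cvl_0_1
```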
22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=326176 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 326176 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 326176 ']' 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:04.549 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:04.549 [2024-10-11 22:52:07.587414] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:29:04.549 [2024-10-11 22:52:07.587495] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:04.549 [2024-10-11 22:52:07.660590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:04.549 [2024-10-11 22:52:07.708002] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:04.549 [2024-10-11 22:52:07.708059] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:04.549 [2024-10-11 22:52:07.708086] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:04.549 [2024-10-11 22:52:07.708097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:04.549 [2024-10-11 22:52:07.708106] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:04.549 [2024-10-11 22:52:07.712588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:04.549 [2024-10-11 22:52:07.712670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:04.549 [2024-10-11 22:52:07.712736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:04.549 [2024-10-11 22:52:07.712740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:04.808 [2024-10-11 22:52:07.856291] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.808 22:52:07 
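The `waitforlisten`/`max_retries` pattern visible in the trace (autotest_common.sh@836-864) polls until the freshly started target answers on its RPC socket, then returns 0. A generic sketch of that retry loop, with a trivial stand-in probe; the real helper probes the SPDK RPC socket at /var/tmp/spdk.sock:

```shell
#!/usr/bin/env bash
# Sketch of a bounded retry loop in the style of waitforlisten:
# run the probe command until it succeeds or retries are exhausted.
wait_for() {
  local max_retries=$1; shift
  local i
  for ((i = 0; i < max_retries; i++)); do
    "$@" && return 0   # probe succeeded -> process is up and listening
    sleep 0.1
  done
  return 1             # retries exhausted -> caller should fail the test
}

# Example probe: a temp file standing in for the RPC unix socket.
sock=$(mktemp)
wait_for 100 test -e "$sock" && echo "listening on $sock"
rm -f "$sock"
```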
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.808 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:04.808 Malloc1 00:29:04.808 [2024-10-11 22:52:07.951585] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:04.808 Malloc2 00:29:04.808 Malloc3 00:29:04.808 Malloc4 00:29:05.066 Malloc5 00:29:05.066 Malloc6 00:29:05.066 Malloc7 00:29:05.067 Malloc8 00:29:05.067 Malloc9 
00:29:05.326 Malloc10 00:29:05.326 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.326 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:05.326 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:05.326 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:05.326 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=326357 00:29:05.326 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 326357 /var/tmp/bdevperf.sock 00:29:05.326 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 326357 ']' 00:29:05.326 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:05.326 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:05.326 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:05.326 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:05.326 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:29:05.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:05.326 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:29:05.326 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:05.326 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:29:05.326 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:05.326 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:05.326 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:05.326 { 00:29:05.326 "params": { 00:29:05.326 "name": "Nvme$subsystem", 00:29:05.326 "trtype": "$TEST_TRANSPORT", 00:29:05.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.326 "adrfam": "ipv4", 00:29:05.326 "trsvcid": "$NVMF_PORT", 00:29:05.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.326 "hdgst": ${hdgst:-false}, 00:29:05.326 "ddgst": ${ddgst:-false} 00:29:05.326 }, 00:29:05.326 "method": "bdev_nvme_attach_controller" 00:29:05.326 } 00:29:05.326 EOF 00:29:05.326 )") 00:29:05.326 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:05.326 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:05.326 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:05.326 { 00:29:05.326 "params": { 00:29:05.326 "name": "Nvme$subsystem", 00:29:05.326 "trtype": "$TEST_TRANSPORT", 00:29:05.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.326 
"adrfam": "ipv4", 00:29:05.326 "trsvcid": "$NVMF_PORT", 00:29:05.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.326 "hdgst": ${hdgst:-false}, 00:29:05.326 "ddgst": ${ddgst:-false} 00:29:05.326 }, 00:29:05.326 "method": "bdev_nvme_attach_controller" 00:29:05.326 } 00:29:05.326 EOF 00:29:05.326 )") 00:29:05.326 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:05.326 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:05.326 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:05.326 { 00:29:05.326 "params": { 00:29:05.326 "name": "Nvme$subsystem", 00:29:05.326 "trtype": "$TEST_TRANSPORT", 00:29:05.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.326 "adrfam": "ipv4", 00:29:05.326 "trsvcid": "$NVMF_PORT", 00:29:05.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.326 "hdgst": ${hdgst:-false}, 00:29:05.326 "ddgst": ${ddgst:-false} 00:29:05.326 }, 00:29:05.326 "method": "bdev_nvme_attach_controller" 00:29:05.326 } 00:29:05.326 EOF 00:29:05.326 )") 00:29:05.326 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:05.326 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:05.326 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:05.326 { 00:29:05.326 "params": { 00:29:05.326 "name": "Nvme$subsystem", 00:29:05.326 "trtype": "$TEST_TRANSPORT", 00:29:05.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.326 "adrfam": "ipv4", 00:29:05.326 "trsvcid": "$NVMF_PORT", 00:29:05.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:29:05.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.326 "hdgst": ${hdgst:-false}, 00:29:05.326 "ddgst": ${ddgst:-false} 00:29:05.326 }, 00:29:05.326 "method": "bdev_nvme_attach_controller" 00:29:05.326 } 00:29:05.327 EOF 00:29:05.327 )") 00:29:05.327 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:05.327 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:05.327 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:05.327 { 00:29:05.327 "params": { 00:29:05.327 "name": "Nvme$subsystem", 00:29:05.327 "trtype": "$TEST_TRANSPORT", 00:29:05.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.327 "adrfam": "ipv4", 00:29:05.327 "trsvcid": "$NVMF_PORT", 00:29:05.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.327 "hdgst": ${hdgst:-false}, 00:29:05.327 "ddgst": ${ddgst:-false} 00:29:05.327 }, 00:29:05.327 "method": "bdev_nvme_attach_controller" 00:29:05.327 } 00:29:05.327 EOF 00:29:05.327 )") 00:29:05.327 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:05.327 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:05.327 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:05.327 { 00:29:05.327 "params": { 00:29:05.327 "name": "Nvme$subsystem", 00:29:05.327 "trtype": "$TEST_TRANSPORT", 00:29:05.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.327 "adrfam": "ipv4", 00:29:05.327 "trsvcid": "$NVMF_PORT", 00:29:05.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.327 "hdgst": ${hdgst:-false}, 00:29:05.327 "ddgst": 
${ddgst:-false} 00:29:05.327 }, 00:29:05.327 "method": "bdev_nvme_attach_controller" 00:29:05.327 } 00:29:05.327 EOF 00:29:05.327 )") 00:29:05.327 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:05.327 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:05.327 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:05.327 { 00:29:05.327 "params": { 00:29:05.327 "name": "Nvme$subsystem", 00:29:05.327 "trtype": "$TEST_TRANSPORT", 00:29:05.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.327 "adrfam": "ipv4", 00:29:05.327 "trsvcid": "$NVMF_PORT", 00:29:05.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.327 "hdgst": ${hdgst:-false}, 00:29:05.327 "ddgst": ${ddgst:-false} 00:29:05.327 }, 00:29:05.327 "method": "bdev_nvme_attach_controller" 00:29:05.327 } 00:29:05.327 EOF 00:29:05.327 )") 00:29:05.327 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:05.327 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:05.327 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:05.327 { 00:29:05.327 "params": { 00:29:05.327 "name": "Nvme$subsystem", 00:29:05.327 "trtype": "$TEST_TRANSPORT", 00:29:05.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.327 "adrfam": "ipv4", 00:29:05.327 "trsvcid": "$NVMF_PORT", 00:29:05.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.327 "hdgst": ${hdgst:-false}, 00:29:05.327 "ddgst": ${ddgst:-false} 00:29:05.327 }, 00:29:05.327 "method": "bdev_nvme_attach_controller" 00:29:05.327 } 00:29:05.327 EOF 00:29:05.327 
)") 00:29:05.327 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:05.327 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:05.327 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:05.327 { 00:29:05.327 "params": { 00:29:05.327 "name": "Nvme$subsystem", 00:29:05.327 "trtype": "$TEST_TRANSPORT", 00:29:05.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.327 "adrfam": "ipv4", 00:29:05.327 "trsvcid": "$NVMF_PORT", 00:29:05.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.327 "hdgst": ${hdgst:-false}, 00:29:05.327 "ddgst": ${ddgst:-false} 00:29:05.327 }, 00:29:05.327 "method": "bdev_nvme_attach_controller" 00:29:05.327 } 00:29:05.327 EOF 00:29:05.327 )") 00:29:05.327 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:05.327 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:05.327 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:05.327 { 00:29:05.327 "params": { 00:29:05.327 "name": "Nvme$subsystem", 00:29:05.327 "trtype": "$TEST_TRANSPORT", 00:29:05.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.327 "adrfam": "ipv4", 00:29:05.327 "trsvcid": "$NVMF_PORT", 00:29:05.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.327 "hdgst": ${hdgst:-false}, 00:29:05.327 "ddgst": ${ddgst:-false} 00:29:05.327 }, 00:29:05.327 "method": "bdev_nvme_attach_controller" 00:29:05.327 } 00:29:05.327 EOF 00:29:05.327 )") 00:29:05.327 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:05.327 
22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 00:29:05.327 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:29:05.327 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:05.327 "params": { 00:29:05.327 "name": "Nvme1", 00:29:05.327 "trtype": "tcp", 00:29:05.327 "traddr": "10.0.0.2", 00:29:05.327 "adrfam": "ipv4", 00:29:05.327 "trsvcid": "4420", 00:29:05.327 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:05.327 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:05.327 "hdgst": false, 00:29:05.327 "ddgst": false 00:29:05.327 }, 00:29:05.327 "method": "bdev_nvme_attach_controller" 00:29:05.327 },{ 00:29:05.327 "params": { 00:29:05.327 "name": "Nvme2", 00:29:05.327 "trtype": "tcp", 00:29:05.327 "traddr": "10.0.0.2", 00:29:05.327 "adrfam": "ipv4", 00:29:05.327 "trsvcid": "4420", 00:29:05.327 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:05.327 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:05.327 "hdgst": false, 00:29:05.327 "ddgst": false 00:29:05.327 }, 00:29:05.327 "method": "bdev_nvme_attach_controller" 00:29:05.327 },{ 00:29:05.327 "params": { 00:29:05.327 "name": "Nvme3", 00:29:05.327 "trtype": "tcp", 00:29:05.327 "traddr": "10.0.0.2", 00:29:05.327 "adrfam": "ipv4", 00:29:05.327 "trsvcid": "4420", 00:29:05.327 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:05.327 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:05.327 "hdgst": false, 00:29:05.327 "ddgst": false 00:29:05.327 }, 00:29:05.327 "method": "bdev_nvme_attach_controller" 00:29:05.327 },{ 00:29:05.327 "params": { 00:29:05.327 "name": "Nvme4", 00:29:05.327 "trtype": "tcp", 00:29:05.327 "traddr": "10.0.0.2", 00:29:05.327 "adrfam": "ipv4", 00:29:05.327 "trsvcid": "4420", 00:29:05.327 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:05.327 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:05.327 "hdgst": false, 00:29:05.327 "ddgst": false 00:29:05.327 }, 
00:29:05.327 "method": "bdev_nvme_attach_controller" 00:29:05.327 },{ 00:29:05.327 "params": { 00:29:05.327 "name": "Nvme5", 00:29:05.327 "trtype": "tcp", 00:29:05.327 "traddr": "10.0.0.2", 00:29:05.327 "adrfam": "ipv4", 00:29:05.327 "trsvcid": "4420", 00:29:05.327 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:05.327 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:05.327 "hdgst": false, 00:29:05.327 "ddgst": false 00:29:05.327 }, 00:29:05.327 "method": "bdev_nvme_attach_controller" 00:29:05.327 },{ 00:29:05.327 "params": { 00:29:05.327 "name": "Nvme6", 00:29:05.327 "trtype": "tcp", 00:29:05.328 "traddr": "10.0.0.2", 00:29:05.328 "adrfam": "ipv4", 00:29:05.328 "trsvcid": "4420", 00:29:05.328 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:05.328 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:05.328 "hdgst": false, 00:29:05.328 "ddgst": false 00:29:05.328 }, 00:29:05.328 "method": "bdev_nvme_attach_controller" 00:29:05.328 },{ 00:29:05.328 "params": { 00:29:05.328 "name": "Nvme7", 00:29:05.328 "trtype": "tcp", 00:29:05.328 "traddr": "10.0.0.2", 00:29:05.328 "adrfam": "ipv4", 00:29:05.328 "trsvcid": "4420", 00:29:05.328 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:05.328 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:05.328 "hdgst": false, 00:29:05.328 "ddgst": false 00:29:05.328 }, 00:29:05.328 "method": "bdev_nvme_attach_controller" 00:29:05.328 },{ 00:29:05.328 "params": { 00:29:05.328 "name": "Nvme8", 00:29:05.328 "trtype": "tcp", 00:29:05.328 "traddr": "10.0.0.2", 00:29:05.328 "adrfam": "ipv4", 00:29:05.328 "trsvcid": "4420", 00:29:05.328 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:05.328 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:05.328 "hdgst": false, 00:29:05.328 "ddgst": false 00:29:05.328 }, 00:29:05.328 "method": "bdev_nvme_attach_controller" 00:29:05.328 },{ 00:29:05.328 "params": { 00:29:05.328 "name": "Nvme9", 00:29:05.328 "trtype": "tcp", 00:29:05.328 "traddr": "10.0.0.2", 00:29:05.328 "adrfam": "ipv4", 00:29:05.328 "trsvcid": "4420", 00:29:05.328 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:05.328 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:05.328 "hdgst": false, 00:29:05.328 "ddgst": false 00:29:05.328 }, 00:29:05.328 "method": "bdev_nvme_attach_controller" 00:29:05.328 },{ 00:29:05.328 "params": { 00:29:05.328 "name": "Nvme10", 00:29:05.328 "trtype": "tcp", 00:29:05.328 "traddr": "10.0.0.2", 00:29:05.328 "adrfam": "ipv4", 00:29:05.328 "trsvcid": "4420", 00:29:05.328 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:05.328 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:05.328 "hdgst": false, 00:29:05.328 "ddgst": false 00:29:05.328 }, 00:29:05.328 "method": "bdev_nvme_attach_controller" 00:29:05.328 }' 00:29:05.328 [2024-10-11 22:52:08.444031] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:29:05.328 [2024-10-11 22:52:08.444124] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid326357 ] 00:29:05.328 [2024-10-11 22:52:08.508462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.328 [2024-10-11 22:52:08.555454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.228 Running I/O for 10 seconds... 
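The `IFS=,` / `printf '%s\n'` pair in the trace joins the collected fragments into the single comma-separated stream printed above (the `},{` boundaries between Nvme1…Nvme10), which `jq .` then normalizes. A sketch of just that join step, with two toy fragments standing in for the real ones:

```shell
#!/usr/bin/env bash
# Sketch of the join step: expanding an array with "${config[*]}" while
# IFS=, concatenates the elements with a comma, producing the "},{"
# boundaries seen in the log output above.
config=('{ "params": { "name": "Nvme1" } }' '{ "params": { "name": "Nvme2" } }')

IFS=,                                  # first char of IFS is the join separator
joined=$(printf '%s\n' "${config[*]}")
unset IFS                              # restore default word splitting

echo "$joined"
```

Note that `"${config[*]}"` (one word, joined on IFS) is deliberately used here rather than `"${config[@]}"` (one word per element), which would make `printf '%s\n'` emit each fragment on its own line instead.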
00:29:07.228 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:07.228 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:29:07.228 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:07.228 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.228 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:07.228 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.228 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:07.228 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:07.228 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:07.228 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:07.228 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:07.228 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:07.228 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:07.228 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:07.228 22:52:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:07.228 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:07.228 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.228 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:07.228 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.486 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=10 00:29:07.486 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 10 -ge 100 ']' 00:29:07.486 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:07.745 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:07.745 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:07.745 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:07.745 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:07.745 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.745 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:07.745 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:29:07.745 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=76 00:29:07.745 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 76 -ge 100 ']' 00:29:07.745 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:08.018 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:08.018 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:08.018 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:08.018 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:08.018 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.018 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:08.018 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.018 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:08.018 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:08.018 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:08.018 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:08.018 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:08.018 22:52:11 
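The `waitforio` trace above polls `bdev_get_iostat` through `rpc_cmd`/`jq` every 0.25 s, succeeding once `num_read_ops` reaches 100 (here 10 → 76 → 131) or giving up after 10 attempts. A generic sketch of that bounded-retry pattern; `get_count` is a hypothetical stand-in for the real `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'` pipeline:

```shell
#!/usr/bin/env bash
# Bounded-retry polling, mirroring waitforio in target/shutdown.sh:
# succeed (return 0) once the counter crosses the threshold, otherwise
# retry with a short sleep until the attempt budget is spent.
get_count() {
    # Hypothetical stand-in: derive a growing read count from the
    # remaining-attempts counter so the sketch is self-contained.
    echo $(( (11 - $1) * 65 ))
}

waitforio() {
    local ret=1 i count
    for ((i = 10; i != 0; i--)); do
        count=$(get_count "$i")
        if [ "$count" -ge 100 ]; then
            ret=0          # enough I/O observed
            break
        fi
        sleep 0.25         # same poll interval as the log above
    done
    return $ret
}

waitforio && echo "I/O observed" || echo "timed out"
```

The real helper additionally traps SIGINT/SIGTERM/EXIT (as seen at `target/shutdown.sh@131`) so the bdevperf process is killed even if the wait is interrupted.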
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 326176 00:29:08.018 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 326176 ']' 00:29:08.018 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 326176 00:29:08.018 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:29:08.018 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:08.018 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 326176 00:29:08.018 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:08.018 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:08.018 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 326176' 00:29:08.018 killing process with pid 326176 00:29:08.018 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 326176 00:29:08.018 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 326176 00:29:08.018 [2024-10-11 22:52:11.150375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237900 is same with the state(6) to be set 00:29:08.018 [2024-10-11 22:52:11.152905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab220 is same with the state(6) to be set 00:29:08.019 [2024-10-11 22:52:11.154988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.020
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.020 [2024-10-11 22:52:11.155434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.020 [2024-10-11 22:52:11.155445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.020 [2024-10-11 22:52:11.155457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.020 [2024-10-11 22:52:11.155468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.020 [2024-10-11 22:52:11.155479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.020 [2024-10-11 22:52:11.155490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.020 [2024-10-11 22:52:11.155502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.020 [2024-10-11 22:52:11.155513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.020 [2024-10-11 22:52:11.155528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.020 [2024-10-11 22:52:11.155540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.020 [2024-10-11 22:52:11.155560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.020 [2024-10-11 22:52:11.155589] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.020 [2024-10-11 22:52:11.155609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.020 [2024-10-11 22:52:11.155620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.020 [2024-10-11 22:52:11.155632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.020 [2024-10-11 22:52:11.155644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.020 [2024-10-11 22:52:11.155657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.020 [2024-10-11 22:52:11.155670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.155681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.155694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.155706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.155718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.155730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.155742] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.155754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.155766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.155779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.155790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.155802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2237dd0 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159144] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159290] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159432] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159619] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159766] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.159892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2238c60 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.161197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.161223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.161237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.161250] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.161262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.161274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.021 [2024-10-11 22:52:11.161286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161397] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161606] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161758] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161935] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.161995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.162007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.162020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.162031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.162043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.162055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.162066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239150 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.163562] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.163598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.163614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.163627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.163639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.163652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.163665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.163678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.163690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.163703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.163716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.163729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.163741] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.163754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.163767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.163778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.163804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.163818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.163831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.163843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.163875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.163887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.163899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.163911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.022 [2024-10-11 22:52:11.163934] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.163946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.163958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.163969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.163981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.163993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164074] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164222] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164371] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.164395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239620 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165650] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165795] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.023 [2024-10-11 22:52:11.165942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.165954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.165966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.165977] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.165993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.166005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.166017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.166029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.166041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.166053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.166065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.166077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.166089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.166108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.166120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.166131] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.166143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.166155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.166166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.166178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.166189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.166200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.166212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.166223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.166235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.166247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.166259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.166270] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.166282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.166294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.166305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.166320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.166332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399a0 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167203] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167344] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167495] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167679] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.024 [2024-10-11 22:52:11.167764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.025 [2024-10-11 22:52:11.167776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.025 [2024-10-11 22:52:11.167788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.025 [2024-10-11 22:52:11.167804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.025 [2024-10-11 22:52:11.167817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.025 [2024-10-11 22:52:11.167829] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.025 [2024-10-11 22:52:11.167841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.025 [2024-10-11 22:52:11.167879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.025 [2024-10-11 22:52:11.167891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.025 [2024-10-11 22:52:11.167903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.025 [2024-10-11 22:52:11.167918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.025 [2024-10-11 22:52:11.167929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.025 [2024-10-11 22:52:11.167941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.025 [2024-10-11 22:52:11.167953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aad50 is same with the state(6) to be set 00:29:08.025 [2024-10-11 22:52:11.169990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.170036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.170068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 
22:52:11.170099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.170117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.170132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.170149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.170164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.170180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.170194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.170210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.170225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.170241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.170255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.170272] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.170302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.170319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.170333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.170349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.170363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.170378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.170391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.170407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.170420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.170435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.170449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.170464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.170477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.170493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.170507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.170522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.170535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.170559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.170604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.170621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.170636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.170652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.170667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.170684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.170698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.170719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.170735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.170751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.170765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.170782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.170797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.170813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.170826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 
[2024-10-11 22:52:11.170850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.170864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.170896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.170915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.170930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.170944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.170959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.170973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.170989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.171003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.171028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.171041] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.171057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.171082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.171098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.025 [2024-10-11 22:52:11.171112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.025 [2024-10-11 22:52:11.171127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.171146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.171162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.171177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.171193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.171206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.171222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.171238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.171254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.171268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.171284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.171298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.171314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.171328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.171343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.171357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.171373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.171387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:08.026 [2024-10-11 22:52:11.171403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.171417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.171433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.171447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.171462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.171476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.171491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.171505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.171526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.171542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.171581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.171607] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.171623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.171639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.171654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.171669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.171685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.171701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.171717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.171731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.171747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.171762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.171778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.171792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.171809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.171824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.171840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.171861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.171877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.171891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.171907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.171925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.171941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.171958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.171976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.171991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.172007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.172021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.172037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.172051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.172066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.172080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.172097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 22:52:11.172111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.172126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.026 [2024-10-11 
22:52:11.172140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.172186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.026 [2024-10-11 22:52:11.172269] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x234c120 was disconnected and freed. reset controller. 00:29:08.026 [2024-10-11 22:52:11.172795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.026 [2024-10-11 22:52:11.172820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.172836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.026 [2024-10-11 22:52:11.172857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.172872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.026 [2024-10-11 22:52:11.172886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.172900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.026 [2024-10-11 22:52:11.172914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.172927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1f4b650 is same with the state(6) to be set 00:29:08.026 [2024-10-11 22:52:11.172988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.026 [2024-10-11 22:52:11.173009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.026 [2024-10-11 22:52:11.173036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.026 [2024-10-11 22:52:11.173052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.173067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.173080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.173095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.173108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.173121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57610 is same with the state(6) to be set 00:29:08.027 [2024-10-11 22:52:11.173174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.173199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.173215] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.173229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.173244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.173257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.173272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.173285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.173298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236f300 is same with the state(6) to be set 00:29:08.027 [2024-10-11 22:52:11.173349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.173374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.173391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.173405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.173420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.173433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.173448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.173461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.173474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b8c70 is same with the state(6) to be set 00:29:08.027 [2024-10-11 22:52:11.173537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.173566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.173582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.173607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.173622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.173635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.173650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.173663] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.173676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b9000 is same with the state(6) to be set 00:29:08.027 [2024-10-11 22:52:11.173725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.173745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.173760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.173773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.173787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.173800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.173814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.173827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.173844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2375910 is same with the state(6) to be set 00:29:08.027 [2024-10-11 22:52:11.173885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 
nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.173914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.173929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.173942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.173956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.173970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.173986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.173999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.174017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dee0 is same with the state(6) to be set 00:29:08.027 [2024-10-11 22:52:11.174063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.174084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.174099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.174112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.174126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.174145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.174160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.174173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.174186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dba0 is same with the state(6) to be set 00:29:08.027 [2024-10-11 22:52:11.174234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.174264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.174283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.174305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.174326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.174341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.174363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.027 [2024-10-11 22:52:11.174376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.027 [2024-10-11 22:52:11.174389] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f49450 is same with the state(6) to be set 00:29:08.027 [2024-10-11 22:52:11.174436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.028 [2024-10-11 22:52:11.174456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.174471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.028 [2024-10-11 22:52:11.174484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.174498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.028 [2024-10-11 22:52:11.174511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.174530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.028 [2024-10-11 22:52:11.174544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.174567] nvme_tcp.c: 
337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bab0 is same with the state(6) to be set 00:29:08.028 [2024-10-11 22:52:11.174794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.174817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.174838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.174854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.174877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.174892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.174908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.174923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.174941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.174955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.174972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 
nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.174987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.175006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.175021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.175037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.175051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.175068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.175083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.175099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.175114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.175130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.175145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:08.028 [2024-10-11 22:52:11.175167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.175182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.175199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.175214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.175231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.175245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.175262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.175276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.175292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.175307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.175323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 
22:52:11.175338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.175354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.175368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.175384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.175398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.175415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.175429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.175446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.175460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.175477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.175492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.175508] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.175522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.175538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.175567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.175586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.175604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.175621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.175636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.175652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.175667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.175684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.175699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.175715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.175729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.175745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.175760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.175776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.175790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.175807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.175821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.175848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.175862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.175878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.175893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.028 [2024-10-11 22:52:11.175913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.028 [2024-10-11 22:52:11.175927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.029 [2024-10-11 22:52:11.175943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.029 [2024-10-11 22:52:11.175957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.029 [2024-10-11 22:52:11.175977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.029 [2024-10-11 22:52:11.175993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.029 [2024-10-11 22:52:11.176009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.029 [2024-10-11 22:52:11.176024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.029 [2024-10-11 22:52:11.176040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.029 [2024-10-11 22:52:11.176054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.029 
[2024-10-11 22:52:11.176071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.029 [2024-10-11 22:52:11.176085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.029 [2024-10-11 22:52:11.176101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.029 [2024-10-11 22:52:11.176116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.029 [2024-10-11 22:52:11.176132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.029 [2024-10-11 22:52:11.176146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.029 [2024-10-11 22:52:11.176163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.029 [2024-10-11 22:52:11.176178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.029 [2024-10-11 22:52:11.176194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.029 [2024-10-11 22:52:11.176208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.029 [2024-10-11 22:52:11.176224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.029 [2024-10-11 22:52:11.176239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.029 [2024-10-11 22:52:11.176255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.029 [2024-10-11 22:52:11.176269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.029 [2024-10-11 22:52:11.176286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.029 [2024-10-11 22:52:11.176301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.029 [2024-10-11 22:52:11.176317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.029 [2024-10-11 22:52:11.176332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.029 [2024-10-11 22:52:11.176348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.029 [2024-10-11 22:52:11.176367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.029 [2024-10-11 22:52:11.176384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.029 [2024-10-11 22:52:11.176399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.029 [2024-10-11 22:52:11.176415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.029 [2024-10-11 22:52:11.176430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.029 [2024-10-11 22:52:11.176446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.029 [2024-10-11 22:52:11.176460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.029 [2024-10-11 22:52:11.176476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.029 [2024-10-11 22:52:11.176491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.029 [2024-10-11 22:52:11.176508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.029 [2024-10-11 22:52:11.176523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.029 [2024-10-11 22:52:11.176539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.029 [2024-10-11 22:52:11.176563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.029 [2024-10-11 22:52:11.176582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.029 [2024-10-11 22:52:11.176609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:08.029 [2024-10-11 22:52:11.176624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.029 [2024-10-11 22:52:11.176639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.029 [2024-10-11 22:52:11.176654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.029 [2024-10-11 22:52:11.176670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.029 [2024-10-11 22:52:11.176685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.029 [2024-10-11 22:52:11.176700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.029 [2024-10-11 22:52:11.176716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.029 [2024-10-11 22:52:11.176731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.029 [2024-10-11 22:52:11.176748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.029 [2024-10-11 22:52:11.176762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.029 [2024-10-11 22:52:11.176782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.029 [2024-10-11 
22:52:11.176798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.029 [2024-10-11 22:52:11.176814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.029 [2024-10-11 22:52:11.176829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.029 [2024-10-11 22:52:11.176856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.029 [2024-10-11 22:52:11.176870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.029 [2024-10-11 22:52:11.176959] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x233ed10 was disconnected and freed. reset controller. 
00:29:08.029 [2024-10-11 22:52:11.178618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:29:08.029 [2024-10-11 22:52:11.178663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2375910 (9): Bad file descriptor 00:29:08.029 [2024-10-11 22:52:11.180365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:29:08.029 [2024-10-11 22:52:11.180403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f49450 (9): Bad file descriptor 00:29:08.029 [2024-10-11 22:52:11.181388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.029 [2024-10-11 22:52:11.181421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2375910 with addr=10.0.0.2, port=4420 00:29:08.029 [2024-10-11 22:52:11.181440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2375910 is same with the state(6) to be set 00:29:08.029 [2024-10-11 22:52:11.182480] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:08.029 [2024-10-11 22:52:11.182596] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:08.029 [2024-10-11 22:52:11.182680] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:08.029 [2024-10-11 22:52:11.182761] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:08.029 [2024-10-11 22:52:11.182837] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:08.029 [2024-10-11 22:52:11.182914] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:08.029 [2024-10-11 22:52:11.183014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.029 [2024-10-11 22:52:11.183041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x1f49450 with addr=10.0.0.2, port=4420 00:29:08.029 [2024-10-11 22:52:11.183057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f49450 is same with the state(6) to be set 00:29:08.029 [2024-10-11 22:52:11.183078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2375910 (9): Bad file descriptor 00:29:08.029 [2024-10-11 22:52:11.183121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f4b650 (9): Bad file descriptor 00:29:08.029 [2024-10-11 22:52:11.183157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e57610 (9): Bad file descriptor 00:29:08.029 [2024-10-11 22:52:11.183190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x236f300 (9): Bad file descriptor 00:29:08.029 [2024-10-11 22:52:11.183225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b8c70 (9): Bad file descriptor 00:29:08.030 [2024-10-11 22:52:11.183260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b9000 (9): Bad file descriptor 00:29:08.030 [2024-10-11 22:52:11.183295] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x236dee0 (9): Bad file descriptor 00:29:08.030 [2024-10-11 22:52:11.183345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x236dba0 (9): Bad file descriptor 00:29:08.030 [2024-10-11 22:52:11.183378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f4bab0 (9): Bad file descriptor 00:29:08.030 [2024-10-11 22:52:11.183459] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:08.030 [2024-10-11 22:52:11.183711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.030 [2024-10-11 
22:52:11.183739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.030 [2024-10-11 22:52:11.183764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.030 [2024-10-11 22:52:11.183781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.030 [2024-10-11 22:52:11.183799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.030 [2024-10-11 22:52:11.183815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.030 [2024-10-11 22:52:11.183831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.030 [2024-10-11 22:52:11.183851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.030 [2024-10-11 22:52:11.183867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.030 [2024-10-11 22:52:11.183882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.030 [2024-10-11 22:52:11.183910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.030 [2024-10-11 22:52:11.183924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.030 [2024-10-11 22:52:11.183941] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.030 [2024-10-11 22:52:11.183956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.030 [2024-10-11 22:52:11.183971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21511b0 is same with the state(6) to be set 00:29:08.030 [2024-10-11 22:52:11.184061] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21511b0 was disconnected and freed. reset controller. 00:29:08.030 [2024-10-11 22:52:11.184247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f49450 (9): Bad file descriptor 00:29:08.030 [2024-10-11 22:52:11.184274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:29:08.030 [2024-10-11 22:52:11.184289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:29:08.030 [2024-10-11 22:52:11.184305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:29:08.030 [2024-10-11 22:52:11.185279] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.030 [2024-10-11 22:52:11.185306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:29:08.030 [2024-10-11 22:52:11.185336] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:29:08.030 [2024-10-11 22:52:11.185354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:29:08.030 [2024-10-11 22:52:11.185376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:29:08.030 [2024-10-11 22:52:11.185450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.030 [2024-10-11 22:52:11.185544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.030 [2024-10-11 22:52:11.185579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f4b650 with addr=10.0.0.2, port=4420 00:29:08.030 [2024-10-11 22:52:11.185607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4b650 is same with the state(6) to be set 00:29:08.030 [2024-10-11 22:52:11.185934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f4b650 (9): Bad file descriptor 00:29:08.030 [2024-10-11 22:52:11.186005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:29:08.030 [2024-10-11 22:52:11.186026] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:29:08.030 [2024-10-11 22:52:11.186041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:29:08.030 [2024-10-11 22:52:11.186105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.030 [2024-10-11 22:52:11.190578] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:29:08.030 [2024-10-11 22:52:11.190763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.030 [2024-10-11 22:52:11.190793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2375910 with addr=10.0.0.2, port=4420 00:29:08.030 [2024-10-11 22:52:11.190811] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2375910 is same with the state(6) to be set 00:29:08.030 [2024-10-11 22:52:11.190869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2375910 (9): Bad file descriptor 00:29:08.030 [2024-10-11 22:52:11.190928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:29:08.030 [2024-10-11 22:52:11.190946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:29:08.030 [2024-10-11 22:52:11.190960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:29:08.030 [2024-10-11 22:52:11.191016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.030 [2024-10-11 22:52:11.191605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:29:08.030 [2024-10-11 22:52:11.191775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.030 [2024-10-11 22:52:11.191804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f49450 with addr=10.0.0.2, port=4420 00:29:08.030 [2024-10-11 22:52:11.191820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f49450 is same with the state(6) to be set 00:29:08.030 [2024-10-11 22:52:11.191878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f49450 (9): Bad file descriptor 00:29:08.030 [2024-10-11 22:52:11.191935] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:29:08.030 [2024-10-11 22:52:11.191952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:29:08.030 [2024-10-11 22:52:11.191966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:29:08.030 [2024-10-11 22:52:11.192022] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.030 [2024-10-11 22:52:11.193154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.030 [2024-10-11 22:52:11.193187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.030 [2024-10-11 22:52:11.193226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.030 [2024-10-11 22:52:11.193243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.030 [2024-10-11 22:52:11.193262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.030 [2024-10-11 22:52:11.193277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.030 [2024-10-11 22:52:11.193293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.030 [2024-10-11 22:52:11.193308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.030 [2024-10-11 22:52:11.193324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.030 [2024-10-11 22:52:11.193339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.030 [2024-10-11 22:52:11.193356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.030 [2024-10-11 22:52:11.193371] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.030 [2024-10-11 22:52:11.193388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.030 [2024-10-11 22:52:11.193402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.030 [2024-10-11 22:52:11.193419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.030 [2024-10-11 22:52:11.193434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.030 [2024-10-11 22:52:11.193451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.030 [2024-10-11 22:52:11.193466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.030 [2024-10-11 22:52:11.193483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.030 [2024-10-11 22:52:11.193497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.030 [2024-10-11 22:52:11.193514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.030 [2024-10-11 22:52:11.193528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.030 [2024-10-11 22:52:11.193561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.030 [2024-10-11 22:52:11.193579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.030 [2024-10-11 22:52:11.193596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.193611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.193627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.193647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.193664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.193679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.193696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.193711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.193727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.193742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:08.031 [2024-10-11 22:52:11.193759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.193773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.193790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.193805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.193821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.193836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.193853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.193868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.193884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.193899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.193916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.193942] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.193959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.193974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.193999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.194014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.194030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.194046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.194067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.194082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.194099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.194114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.194130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.194146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.194163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.194178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.194194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.194209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.194226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.194242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.194259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.194273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.194290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.194305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.194321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.194336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.194352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.194366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.194383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.194398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.194415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.194429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.194445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.194464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.194481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 
22:52:11.194496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.194512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.194527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.194544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.194567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.194585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.194609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.194625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.194640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.194656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.194673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.194690] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.194704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.194721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.194735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.194753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.194768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.194784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.194799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.194816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.194831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.194857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.031 [2024-10-11 22:52:11.194871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.031 [2024-10-11 22:52:11.194891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.194907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.194923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.194938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.194955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.194969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.194985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.195000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.195017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.195031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.195048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 
[2024-10-11 22:52:11.195063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.195080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.195094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.195110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.195125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.195142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.195157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.195173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.195188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.195205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.195219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.195236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.195250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.195267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.195286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.195302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fe30 is same with the state(6) to be set 00:29:08.032 [2024-10-11 22:52:11.196588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.196621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.196641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.196658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.196676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.196691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.196708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.196723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.196740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.196755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.196771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.196785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.196802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.196816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.196833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.196848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.196864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.196878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:08.032 [2024-10-11 22:52:11.196894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.196914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.196932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.196948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.196964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.196983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.197002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.197018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.197035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.197050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.197066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.197081] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.197099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.197114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.197130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.197144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.197161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.197176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.197193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.197207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.197223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.197237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.032 [2024-10-11 22:52:11.197254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.032 [2024-10-11 22:52:11.197268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.197285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.197299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.197315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.197330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.197347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.197361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.197382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.197398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.197414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.197429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.197445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.197459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.197476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.197490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.197506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.197521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.197537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.197560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.197579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.197594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.197611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 
22:52:11.197625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.197641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.197655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.197672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.197686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.197703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.197717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.197734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.197749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.197765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.197784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.197801] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.197816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.197832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.197853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.197870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.197885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.197901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.197916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.197932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.197947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.197964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.197979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.197995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.198010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.198028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.198043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.198059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.198073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.198090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.198105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.198121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.198135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.198152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 
[2024-10-11 22:52:11.198166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.198182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.198200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.198218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.198232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.198248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.198263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.198279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.198293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.198309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.198323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.198340] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.198354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.198370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.198385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.198401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.198416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.198432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.198447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.198463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.198478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.033 [2024-10-11 22:52:11.198494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.033 [2024-10-11 22:52:11.198509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.198526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.198541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.198567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.198583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.198614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.198630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.198647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.198661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.198676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234d650 is same with the state(6) to be set 00:29:08.034 [2024-10-11 22:52:11.199921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.199944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:08.034 [2024-10-11 22:52:11.199965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.199981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.199998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.200030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.200062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.200094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.200125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200139] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.200156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.200188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.200220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.200257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.200289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.200322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.200353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.200384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.200416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.200447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.200478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:08.034 [2024-10-11 22:52:11.200509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.200540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.200582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.200624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.200655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.200692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200707] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.200724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.200754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.200786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.200817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.200855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.200887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.200919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.200950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.200981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.200996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.034 [2024-10-11 22:52:11.201012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.034 [2024-10-11 22:52:11.201027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.201068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.201109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.201140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.201171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.201202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.201232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 
22:52:11.201263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.201294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.201325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.201356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.201387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.201417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201434] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.201448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.201483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.201515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.201561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.201595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.201627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.201657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.201688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.201719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.201749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.201781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 
[2024-10-11 22:52:11.201812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.201848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.201880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.201917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.201948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.201980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.201996] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.202011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.202026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234eb80 is same with the state(6) to be set 00:29:08.035 [2024-10-11 22:52:11.203297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.203321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.203343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.203359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.203377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.203392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.203408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.203423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.203439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.203454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.203471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.203486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.203503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.203517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.203534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.035 [2024-10-11 22:52:11.203561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.035 [2024-10-11 22:52:11.203586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.203608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.203625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.203640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:08.036 [2024-10-11 22:52:11.203657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.203672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.203689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.203705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.203722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.203736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.203753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.203767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.203784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.203799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.203815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.203830] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.203851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.203866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.203883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.203898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.203924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.203939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.203956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.203971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.203987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.204006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.204023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.204038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.204055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.204070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.204087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.204101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.204118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.204133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.204149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.204163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.204179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.204194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.204211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.204226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.204244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.204259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.204275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.204290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.204307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.204321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.204338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.204352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.204369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 
22:52:11.204384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.204404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.204419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.204436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.204450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.204467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.204481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.204498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.204512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.204528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.204543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.204569] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.204585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.204608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.204623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.204640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.204655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.204672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.036 [2024-10-11 22:52:11.204686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.036 [2024-10-11 22:52:11.204702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.204717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.204734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.204748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.204764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.204779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.204795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.204814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.204832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.204856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.204872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.204887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.204904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.204918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.204935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 
[2024-10-11 22:52:11.204950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.204966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.204981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.204997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.205012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.205028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.205043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.205059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.205074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.205091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.205105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.205121] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.205135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.205152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.205167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.205183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.205198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.205219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.205234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.205251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.205266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.205283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.205297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.205315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.205330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.205346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.205361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.205378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.205392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.205407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23500b0 is same with the state(6) to be set 00:29:08.037 [2024-10-11 22:52:11.206661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.206684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.206706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.206722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:08.037 [2024-10-11 22:52:11.206739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.206754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.206772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.206788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.206804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.206819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.206847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.206861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.206883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.206906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.206922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.206947] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.206965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.206980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.206997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.207011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.207027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.207042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.207058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.207073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.207089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.207104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.207120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.207135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.207151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.207166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.207182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.037 [2024-10-11 22:52:11.207196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.037 [2024-10-11 22:52:11.207213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.207227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.207244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.207258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.207275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.207293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.207310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.207325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.207341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.207356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.207372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.207387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.207404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.207419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.207435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.207451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.207468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 
22:52:11.207483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.207500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.207515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.207531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.207545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.207572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.207588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.207611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.207626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.207643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.207657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.207674] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.207689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.207705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.207724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.207741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.207757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.207773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.207788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.207805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.207819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.207836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.207859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.207875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.207889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.207917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.207932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.207948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.207963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.207980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.207994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.208011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.208026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.208043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 
[2024-10-11 22:52:11.208057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.208073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.208088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.208104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.208119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.208140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.208155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.208172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.208186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.208203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.208218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.208234] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.208249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.208265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.208281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.208298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.208313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.208330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.208345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.208362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.208377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.208393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.208408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.208424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.038 [2024-10-11 22:52:11.208439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.038 [2024-10-11 22:52:11.208456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.208470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.208486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.208501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.208519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.208537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.208561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.208578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.208604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.208619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.208635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.208649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.208666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.208681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.208698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.208722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.208739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.208754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.208771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.208786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.208801] 
nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23515e0 is same with the state(6) to be set 00:29:08.039 [2024-10-11 22:52:11.210042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.210065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.210088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.210104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.210121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.210135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.210151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.210167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.210183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.210203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.210220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.210235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.210251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.210266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.210282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.210296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.210312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.210327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.210343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.210357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.210373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.210388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:08.039 [2024-10-11 22:52:11.210405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.210420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.210436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.210450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.210467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.210482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.210499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.210513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.210529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.210544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.210569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.210585] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.210614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.210629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.210645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.210659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.210675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.210690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.210706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.210720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.210737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.210751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.210767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.210782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.210804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.210819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.210845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.210860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.210876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.210890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.210912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.210926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.210942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.210956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:08.039 [2024-10-11 22:52:11.210972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.039 [2024-10-11 22:52:11.210986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.211024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.211056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.211086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.211116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 
22:52:11.211147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.211177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.211207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.211237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.211268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.211298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211319] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.211333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.211363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.211394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.211428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.211459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.211488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.211519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.211562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.211597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.211629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.211665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 
[2024-10-11 22:52:11.211696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.211726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.211756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.211786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.211820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.211854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.211895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.211925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.211956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.211972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.211986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.212002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.212017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.212033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.212047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.212063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.212077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.212093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.212107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.212122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2352a20 is same with the state(6) to be set 00:29:08.040 [2024-10-11 22:52:11.213382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.213405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.213426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.213442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.040 [2024-10-11 22:52:11.213459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.040 [2024-10-11 22:52:11.213479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:08.040 [2024-10-11 22:52:11.213496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.213511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.213528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.213542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.213567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.213584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.213605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.213621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.213637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.213652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.213668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.213683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.213699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.213714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.213730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.213745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.213761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.213775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.213792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.213807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.213823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.213837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.213853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.213868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.213889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.213904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.213921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.213935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.213952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.213966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.213983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.213997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.214013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.214028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:08.041 [2024-10-11 22:52:11.214045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.214059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.214076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.214091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.214107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.214122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.214139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.214153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.214169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.214184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.214200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.214215] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.214231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.214245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.214262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.214280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.214297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.214312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.214328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.214342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.214358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.214372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.214389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.214403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.214420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.214434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.214451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.214465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.214481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.214496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.214511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.214527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.214543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.041 [2024-10-11 22:52:11.214565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:08.041 [2024-10-11 22:52:11.214582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.042 [2024-10-11 22:52:11.214603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.042 [2024-10-11 22:52:11.214620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.042 [2024-10-11 22:52:11.214634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.042 [2024-10-11 22:52:11.214651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.042 [2024-10-11 22:52:11.214665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.042 [2024-10-11 22:52:11.214686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.042 [2024-10-11 22:52:11.214702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.042 [2024-10-11 22:52:11.214719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.042 [2024-10-11 22:52:11.214733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.042 [2024-10-11 22:52:11.214750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.042 [2024-10-11 
22:52:11.214765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.042 [2024-10-11 22:52:11.214781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.042 [2024-10-11 22:52:11.214796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.042 [2024-10-11 22:52:11.214812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.042 [2024-10-11 22:52:11.214826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.042 [2024-10-11 22:52:11.214853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.042 [2024-10-11 22:52:11.214867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.042 [2024-10-11 22:52:11.214885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.042 [2024-10-11 22:52:11.214908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.042 [2024-10-11 22:52:11.214924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.042 [2024-10-11 22:52:11.214938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.042 [2024-10-11 22:52:11.214955] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.042 [2024-10-11 22:52:11.214970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.042 [2024-10-11 22:52:11.214986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.042 [2024-10-11 22:52:11.215000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.042 [2024-10-11 22:52:11.215017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.042 [2024-10-11 22:52:11.215031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.042 [2024-10-11 22:52:11.215048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.042 [2024-10-11 22:52:11.215067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.042 [2024-10-11 22:52:11.215084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.042 [2024-10-11 22:52:11.215103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.042 [2024-10-11 22:52:11.215120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.042 [2024-10-11 22:52:11.215135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.042 [2024-10-11 22:52:11.215151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.042 [2024-10-11 22:52:11.215166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.042 [2024-10-11 22:52:11.215182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.042 [2024-10-11 22:52:11.215196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.042 [2024-10-11 22:52:11.215212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.042 [2024-10-11 22:52:11.215227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.042 [2024-10-11 22:52:11.215243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.042 [2024-10-11 22:52:11.215257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.042 [2024-10-11 22:52:11.215273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.042 [2024-10-11 22:52:11.215287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.042 [2024-10-11 22:52:11.215303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.042 
[2024-10-11 22:52:11.215318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.042 [2024-10-11 22:52:11.215334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.042 [2024-10-11 22:52:11.215348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.042 [2024-10-11 22:52:11.215364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.042 [2024-10-11 22:52:11.215378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.042 [2024-10-11 22:52:11.215394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.042 [2024-10-11 22:52:11.215409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.042 [2024-10-11 22:52:11.215425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.042 [2024-10-11 22:52:11.215439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.042 [2024-10-11 22:52:11.215454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2b60 is same with the state(6) to be set 00:29:08.042 [2024-10-11 22:52:11.217470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.042 [2024-10-11 22:52:11.217504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode5] resetting controller 00:29:08.042 [2024-10-11 22:52:11.217529] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:29:08.042 [2024-10-11 22:52:11.217547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:29:08.042 [2024-10-11 22:52:11.217698] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:08.042 [2024-10-11 22:52:11.217726] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:08.042 [2024-10-11 22:52:11.217750] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:08.042 [2024-10-11 22:52:11.217864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:29:08.042 [2024-10-11 22:52:11.217888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:29:08.042 task offset: 28288 on job bdev=Nvme4n1 fails 00:29:08.042 00:29:08.042 Latency(us) 00:29:08.042 [2024-10-11T20:52:11.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.042 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:08.042 Job: Nvme1n1 ended in about 0.98 seconds with error 00:29:08.042 Verification LBA range: start 0x0 length 0x400 00:29:08.042 Nvme1n1 : 0.98 196.56 12.28 65.52 0.00 241562.36 18641.35 256318.58 00:29:08.042 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:08.042 Job: Nvme2n1 ended in about 0.97 seconds with error 00:29:08.042 Verification LBA range: start 0x0 length 0x400 00:29:08.042 Nvme2n1 : 0.97 196.78 12.30 7.25 0.00 303655.80 27767.85 273406.48 00:29:08.042 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:08.042 Job: Nvme3n1 ended in about 0.96 
seconds with error 00:29:08.042 Verification LBA range: start 0x0 length 0x400 00:29:08.042 Nvme3n1 : 0.96 199.94 12.50 66.65 0.00 228189.68 9417.77 250104.79 00:29:08.042 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:08.042 Job: Nvme4n1 ended in about 0.96 seconds with error 00:29:08.042 Verification LBA range: start 0x0 length 0x400 00:29:08.042 Nvme4n1 : 0.96 200.32 12.52 66.77 0.00 223077.36 7330.32 265639.25 00:29:08.042 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:08.042 Job: Nvme5n1 ended in about 0.98 seconds with error 00:29:08.042 Verification LBA range: start 0x0 length 0x400 00:29:08.042 Nvme5n1 : 0.98 130.59 8.16 65.30 0.00 298503.08 21845.33 246997.90 00:29:08.042 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:08.042 Job: Nvme6n1 ended in about 0.98 seconds with error 00:29:08.043 Verification LBA range: start 0x0 length 0x400 00:29:08.043 Nvme6n1 : 0.98 195.22 12.20 65.07 0.00 220029.53 19515.16 265639.25 00:29:08.043 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:08.043 Job: Nvme7n1 ended in about 0.99 seconds with error 00:29:08.043 Verification LBA range: start 0x0 length 0x400 00:29:08.043 Nvme7n1 : 0.99 129.70 8.11 64.85 0.00 288459.28 20388.98 270299.59 00:29:08.043 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:08.043 Job: Nvme8n1 ended in about 0.99 seconds with error 00:29:08.043 Verification LBA range: start 0x0 length 0x400 00:29:08.043 Nvme8n1 : 0.99 202.98 12.69 64.63 0.00 205193.35 33010.73 231463.44 00:29:08.043 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:08.043 Job: Nvme9n1 ended in about 0.99 seconds with error 00:29:08.043 Verification LBA range: start 0x0 length 0x400 00:29:08.043 Nvme9n1 : 0.99 128.83 8.05 64.42 0.00 278332.81 37865.24 271853.04 00:29:08.043 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 
65536) 00:29:08.043 Job: Nvme10n1 ended in about 1.00 seconds with error 00:29:08.043 Verification LBA range: start 0x0 length 0x400 00:29:08.043 Nvme10n1 : 1.00 128.40 8.02 64.20 0.00 273599.40 21845.33 288940.94 00:29:08.043 [2024-10-11T20:52:11.311Z] =================================================================================================================== 00:29:08.043 [2024-10-11T20:52:11.311Z] Total : 1709.33 106.83 594.66 0.00 251355.84 7330.32 288940.94 00:29:08.043 [2024-10-11 22:52:11.248441] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:08.043 [2024-10-11 22:52:11.248533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:29:08.043 [2024-10-11 22:52:11.248837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.043 [2024-10-11 22:52:11.248874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f4bab0 with addr=10.0.0.2, port=4420 00:29:08.043 [2024-10-11 22:52:11.248894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bab0 is same with the state(6) to be set 00:29:08.043 [2024-10-11 22:52:11.248992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.043 [2024-10-11 22:52:11.249017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x236dba0 with addr=10.0.0.2, port=4420 00:29:08.043 [2024-10-11 22:52:11.249033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dba0 is same with the state(6) to be set 00:29:08.043 [2024-10-11 22:52:11.249122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.043 [2024-10-11 22:52:11.249146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x236dee0 with addr=10.0.0.2, port=4420 00:29:08.043 [2024-10-11 22:52:11.249162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x236dee0 is same with the state(6) to be set 00:29:08.043 [2024-10-11 22:52:11.249244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.043 [2024-10-11 22:52:11.249268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e57610 with addr=10.0.0.2, port=4420 00:29:08.043 [2024-10-11 22:52:11.249284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57610 is same with the state(6) to be set 00:29:08.043 [2024-10-11 22:52:11.251236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:29:08.043 [2024-10-11 22:52:11.251268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:29:08.043 [2024-10-11 22:52:11.251450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.043 [2024-10-11 22:52:11.251477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x236f300 with addr=10.0.0.2, port=4420 00:29:08.043 [2024-10-11 22:52:11.251494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236f300 is same with the state(6) to be set 00:29:08.043 [2024-10-11 22:52:11.251587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.043 [2024-10-11 22:52:11.251613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9000 with addr=10.0.0.2, port=4420 00:29:08.043 [2024-10-11 22:52:11.251629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b9000 is same with the state(6) to be set 00:29:08.043 [2024-10-11 22:52:11.251707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.043 [2024-10-11 22:52:11.251731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b8c70 with addr=10.0.0.2, port=4420 
00:29:08.043 [2024-10-11 22:52:11.251747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b8c70 is same with the state(6) to be set 00:29:08.043 [2024-10-11 22:52:11.251773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f4bab0 (9): Bad file descriptor 00:29:08.043 [2024-10-11 22:52:11.251796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x236dba0 (9): Bad file descriptor 00:29:08.043 [2024-10-11 22:52:11.251814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x236dee0 (9): Bad file descriptor 00:29:08.043 [2024-10-11 22:52:11.251843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e57610 (9): Bad file descriptor 00:29:08.043 [2024-10-11 22:52:11.251898] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:08.043 [2024-10-11 22:52:11.251928] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:08.043 [2024-10-11 22:52:11.251948] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:08.043 [2024-10-11 22:52:11.251969] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:08.043 [2024-10-11 22:52:11.251991] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:08.043 [2024-10-11 22:52:11.252311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:29:08.043 [2024-10-11 22:52:11.252468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.043 [2024-10-11 22:52:11.252499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f4b650 with addr=10.0.0.2, port=4420 00:29:08.043 [2024-10-11 22:52:11.252515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4b650 is same with the state(6) to be set 00:29:08.043 [2024-10-11 22:52:11.252635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.043 [2024-10-11 22:52:11.252660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2375910 with addr=10.0.0.2, port=4420 00:29:08.043 [2024-10-11 22:52:11.252676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2375910 is same with the state(6) to be set 00:29:08.043 [2024-10-11 22:52:11.252694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x236f300 (9): Bad file descriptor 00:29:08.043 [2024-10-11 22:52:11.252713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b9000 (9): Bad file descriptor 00:29:08.043 [2024-10-11 22:52:11.252731] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b8c70 (9): Bad file descriptor 00:29:08.043 [2024-10-11 22:52:11.252748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.043 [2024-10-11 22:52:11.252761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.043 [2024-10-11 22:52:11.252776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:08.043 [2024-10-11 22:52:11.252796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:29:08.043 [2024-10-11 22:52:11.252810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:29:08.043 [2024-10-11 22:52:11.252823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:29:08.043 [2024-10-11 22:52:11.252840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:29:08.043 [2024-10-11 22:52:11.252853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:29:08.043 [2024-10-11 22:52:11.252866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:29:08.043 [2024-10-11 22:52:11.252882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:29:08.043 [2024-10-11 22:52:11.252895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:29:08.043 [2024-10-11 22:52:11.252908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:29:08.043 [2024-10-11 22:52:11.253004] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.043 [2024-10-11 22:52:11.253025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.043 [2024-10-11 22:52:11.253043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.043 [2024-10-11 22:52:11.253056] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.043 [2024-10-11 22:52:11.253139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.043 [2024-10-11 22:52:11.253165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f49450 with addr=10.0.0.2, port=4420 00:29:08.043 [2024-10-11 22:52:11.253181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f49450 is same with the state(6) to be set 00:29:08.043 [2024-10-11 22:52:11.253199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f4b650 (9): Bad file descriptor 00:29:08.043 [2024-10-11 22:52:11.253218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2375910 (9): Bad file descriptor 00:29:08.043 [2024-10-11 22:52:11.253234] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:29:08.043 [2024-10-11 22:52:11.253248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:29:08.043 [2024-10-11 22:52:11.253261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:29:08.043 [2024-10-11 22:52:11.253278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:29:08.043 [2024-10-11 22:52:11.253292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:29:08.043 [2024-10-11 22:52:11.253306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:29:08.043 [2024-10-11 22:52:11.253321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:29:08.043 [2024-10-11 22:52:11.253335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:29:08.043 [2024-10-11 22:52:11.253348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:29:08.044 [2024-10-11 22:52:11.253385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.044 [2024-10-11 22:52:11.253403] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.044 [2024-10-11 22:52:11.253416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.044 [2024-10-11 22:52:11.253432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f49450 (9): Bad file descriptor 00:29:08.044 [2024-10-11 22:52:11.253448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:29:08.044 [2024-10-11 22:52:11.253462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:29:08.044 [2024-10-11 22:52:11.253475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:29:08.044 [2024-10-11 22:52:11.253491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:29:08.044 [2024-10-11 22:52:11.253505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:29:08.044 [2024-10-11 22:52:11.253518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:29:08.044 [2024-10-11 22:52:11.253575] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.044 [2024-10-11 22:52:11.253594] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.044 [2024-10-11 22:52:11.253607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:29:08.044 [2024-10-11 22:52:11.253619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:29:08.044 [2024-10-11 22:52:11.253632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:29:08.044 [2024-10-11 22:52:11.253675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.611 22:52:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 326357 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 326357 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:09.549 22:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 326357 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:09.549 rmmod nvme_tcp 00:29:09.549 rmmod nvme_fabrics 00:29:09.549 rmmod nvme_keyring 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 326176 ']' 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 326176 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 326176 ']' 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 326176 00:29:09.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (326176) - No such process 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 326176 is not found' 00:29:09.549 Process with pid 326176 is not found 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == 
\t\c\p ]] 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:09.549 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:12.086 00:29:12.086 real 0m7.478s 00:29:12.086 user 0m18.440s 00:29:12.086 sys 0m1.441s 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:12.086 ************************************ 00:29:12.086 END TEST nvmf_shutdown_tc3 00:29:12.086 ************************************ 00:29:12.086 22:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:12.086 ************************************ 00:29:12.086 START TEST nvmf_shutdown_tc4 00:29:12.086 ************************************ 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:12.086 22:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:12.086 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:12.087 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:12.087 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.087 22:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:12.087 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.087 22:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:12.087 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:12.087 
22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:12.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:12.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:29:12.087 00:29:12.087 --- 10.0.0.2 ping statistics --- 00:29:12.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.087 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:12.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:12.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:29:12.087 00:29:12.087 --- 10.0.0.1 ping statistics --- 00:29:12.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.087 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:12.087 22:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:12.087 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:12.087 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:12.087 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:12.087 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:12.087 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:12.087 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=327261 00:29:12.087 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:12.087 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 327261 00:29:12.087 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 327261 ']' 00:29:12.087 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.087 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:12.088 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:12.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:12.088 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:12.088 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:12.088 [2024-10-11 22:52:15.071078] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:29:12.088 [2024-10-11 22:52:15.071185] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:12.088 [2024-10-11 22:52:15.135186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:12.088 [2024-10-11 22:52:15.179690] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:12.088 [2024-10-11 22:52:15.179748] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:12.088 [2024-10-11 22:52:15.179772] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:12.088 [2024-10-11 22:52:15.179782] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:12.088 [2024-10-11 22:52:15.179792] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:12.088 [2024-10-11 22:52:15.181213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:12.088 [2024-10-11 22:52:15.181274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:12.088 [2024-10-11 22:52:15.181341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:12.088 [2024-10-11 22:52:15.181344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.088 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:12.088 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:29:12.088 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:12.088 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:12.088 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:12.088 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:12.088 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:12.088 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.088 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:12.088 [2024-10-11 22:52:15.322673] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:12.088 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.088 22:52:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:12.088 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:12.088 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:12.088 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:12.088 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:12.088 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.088 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:12.088 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.088 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:12.088 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.088 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:12.088 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.088 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:12.088 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.088 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:29:12.088 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.088 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:12.346 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.346 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:12.346 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.346 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:12.346 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.346 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:12.346 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.346 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:12.346 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:12.346 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.346 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:12.346 Malloc1 00:29:12.346 [2024-10-11 22:52:15.424580] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:12.346 Malloc2 00:29:12.346 Malloc3 00:29:12.346 Malloc4 00:29:12.346 Malloc5 00:29:12.604 Malloc6 00:29:12.604 Malloc7 00:29:12.604 Malloc8 00:29:12.604 Malloc9 
00:29:12.604 Malloc10 00:29:12.604 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.604 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:12.604 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:12.604 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:12.861 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=327438 00:29:12.861 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:29:12.861 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:29:12.861 [2024-10-11 22:52:15.932235] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:29:18.132 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:18.132 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 327261 00:29:18.132 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 327261 ']' 00:29:18.132 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 327261 00:29:18.132 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:29:18.132 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:18.133 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 327261 00:29:18.133 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:18.133 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:18.133 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 327261' 00:29:18.133 killing process with pid 327261 00:29:18.133 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 327261 00:29:18.133 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 327261 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error 
(sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 
Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 [2024-10-11 22:52:20.937835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231eb50 is same with the state(6) to be set 00:29:18.133 [2024-10-11 22:52:20.937860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.133 [2024-10-11 22:52:20.937923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231eb50 is same with the state(6) to be set 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 
00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 [2024-10-11 22:52:20.939128] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.133 starting I/O failed: -6 00:29:18.133 starting I/O failed: -6 00:29:18.133 starting I/O failed: -6 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 
00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 [2024-10-11 22:52:20.940493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:18.133 
Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.133 starting I/O failed: -6 00:29:18.133 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 
00:29:18.134 [2024-10-11 22:52:20.941073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2591a40 is same with Write completed with error (sct=0, sc=8) 00:29:18.134 the state(6) to be set 00:29:18.134 starting I/O failed: -6 00:29:18.134 [2024-10-11 22:52:20.941106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2591a40 is same with Write completed with error (sct=0, sc=8) 00:29:18.134 the state(6) to be set 00:29:18.134 starting I/O failed: -6 00:29:18.134 [2024-10-11 22:52:20.941122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2591a40 is same with the state(6) to be set 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 [2024-10-11 22:52:20.941135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2591a40 is same with the state(6) to be set 00:29:18.134 starting I/O failed: -6 00:29:18.134 [2024-10-11 22:52:20.941147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2591a40 is same with the state(6) to be set 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: 
-6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O 
failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 [2024-10-11 22:52:20.942091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:18.134 NVMe io qpair process completion error 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed 
with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 [2024-10-11 22:52:20.943360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed 
with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.134 starting I/O failed: -6 00:29:18.134 Write completed with error (sct=0, sc=8) 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 Write completed with error (sct=0, 
sc=8) 00:29:18.135 [2024-10-11 22:52:20.944394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 
00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with 
error (sct=0, sc=8) 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 [2024-10-11 22:52:20.945536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O 
failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting 
I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 [2024-10-11 22:52:20.947450] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:18.135 NVMe io qpair process completion error 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 starting I/O failed: -6 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.135 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 
00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 [2024-10-11 22:52:20.948629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.136 starting I/O failed: -6 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, 
sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 [2024-10-11 22:52:20.949727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error 
(sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting 
I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 [2024-10-11 22:52:20.950908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: 
-6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O 
failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.136 starting I/O failed: -6 00:29:18.136 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting 
I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 [2024-10-11 22:52:20.953155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:18.137 NVMe io qpair process completion error 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 
00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 Write completed with error (sct=0, sc=8) 00:29:18.137 starting I/O failed: -6 00:29:18.137 Write completed 
with error (sct=0, sc=8)
00:29:18.137 Write completed with error (sct=0, sc=8)
00:29:18.137 starting I/O failed: -6
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted]
00:29:18.137 [2024-10-11 22:52:20.954392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[repeated write-error lines omitted]
00:29:18.138 [2024-10-11 22:52:20.955509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[repeated write-error lines omitted]
00:29:18.138 [2024-10-11 22:52:20.956661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[repeated write-error lines omitted]
00:29:18.138 [2024-10-11 22:52:20.958573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:18.138 NVMe io qpair process completion error
[repeated write-error lines omitted]
00:29:18.139 [2024-10-11 22:52:20.959831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[repeated write-error lines omitted]
00:29:18.139 [2024-10-11 22:52:20.960942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[repeated write-error lines omitted]
00:29:18.139 [2024-10-11 22:52:20.962082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[repeated write-error lines omitted]
00:29:18.140 [2024-10-11 22:52:20.964233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:18.140 NVMe io qpair process completion error
[repeated write-error lines omitted]
00:29:18.140 [2024-10-11 22:52:20.965467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[repeated write-error lines omitted]
00:29:18.140 [2024-10-11 22:52:20.966454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[repeated write-error lines omitted]
00:29:18.141 [2024-10-11 22:52:20.967659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[repeated write-error lines omitted; log continues]
00:29:18.141 Write
completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 
Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 
00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 [2024-10-11 22:52:20.969919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:18.141 NVMe io qpair process completion error 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with 
error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 [2024-10-11 22:52:20.971187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with 
error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.141 Write completed with error (sct=0, sc=8) 00:29:18.141 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 Write completed with error (sct=0, sc=8) 
00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 [2024-10-11 22:52:20.972171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.142 starting I/O failed: -6 00:29:18.142 starting I/O failed: -6 00:29:18.142 starting I/O failed: -6 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed 
with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 
starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 [2024-10-11 22:52:20.973558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write 
completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 
Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.142 starting I/O failed: -6 00:29:18.142 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 
00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 [2024-10-11 22:52:20.977381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:18.143 NVMe io qpair process completion error 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error 
(sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 [2024-10-11 22:52:20.978722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write 
completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 [2024-10-11 22:52:20.979686] 
nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 00:29:18.143 starting I/O failed: -6 00:29:18.143 Write completed with error (sct=0, sc=8) 
00:29:18.143 starting I/O failed: -6
00:29:18.143 Write completed with error (sct=0, sc=8)
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records elided]
00:29:18.143 [2024-10-11 22:52:20.980851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:18.144 [2024-10-11 22:52:20.984583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:18.144 NVMe io qpair process completion error
00:29:18.144 [2024-10-11 22:52:20.985977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.145 [2024-10-11 22:52:20.986923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.145 [2024-10-11 22:52:20.988069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:18.145 [2024-10-11 22:52:20.990540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:18.145 NVMe io qpair process completion error
00:29:18.146 [2024-10-11 22:52:20.991925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.146 [2024-10-11 22:52:20.992966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:18.146 [2024-10-11 22:52:20.994075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.147 [2024-10-11 22:52:20.996239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:18.147 NVMe io qpair process completion error
00:29:18.147 Initializing NVMe Controllers
00:29:18.147 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:29:18.147 Controller IO queue size 128, less than required.
00:29:18.147 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:18.147 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:18.147 Controller IO queue size 128, less than required.
00:29:18.147 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:18.147 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:29:18.147 Controller IO queue size 128, less than required.
00:29:18.147 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:18.147 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:29:18.147 Controller IO queue size 128, less than required.
00:29:18.147 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:18.147 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:29:18.147 Controller IO queue size 128, less than required.
00:29:18.147 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:18.147 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:29:18.147 Controller IO queue size 128, less than required.
00:29:18.147 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:18.147 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:29:18.147 Controller IO queue size 128, less than required.
00:29:18.147 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:18.147 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:29:18.147 Controller IO queue size 128, less than required.
00:29:18.147 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:18.147 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:29:18.147 Controller IO queue size 128, less than required.
00:29:18.147 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:18.147 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:29:18.147 Controller IO queue size 128, less than required.
00:29:18.147 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:18.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:29:18.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:18.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:29:18.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:29:18.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:29:18.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:29:18.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:29:18.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:29:18.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:29:18.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:29:18.147 Initialization complete. Launching workers.
00:29:18.147 ========================================================
00:29:18.147 Latency(us)
00:29:18.147 Device Information : IOPS MiB/s Average min max
00:29:18.147 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1802.53 77.45 71034.10 1096.56 143500.30
00:29:18.147 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1783.36 76.63 70913.85 1000.40 120931.37
00:29:18.147 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1798.01 77.26 70361.36 1032.25 120561.42
00:29:18.147 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1776.26 76.32 71248.70 808.00 121329.53
00:29:18.147 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1791.76 76.99 70665.05 1032.48 123745.40
00:29:18.147 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1736.43 74.61 72947.51 1109.14 126486.15
00:29:18.147 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1759.90 75.62 72004.19 912.93 129190.47
00:29:18.147 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1838.48 79.00 68962.79 916.97 132032.25
00:29:18.147 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1849.68 79.48 68599.90 824.84 118111.17
00:29:18.147 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1829.65 78.62 69405.17 980.14 140594.25
00:29:18.147 ========================================================
00:29:18.147 Total : 17966.06 771.98 70590.75 808.00 143500.30
00:29:18.147
00:29:18.147 [2024-10-11 22:52:21.001543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a14200 is same with the state(6) to be set
00:29:18.147 [2024-10-11 22:52:21.001638] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a18160 is same with the state(6) to be set
00:29:18.147 [2024-10-11 22:52:21.001698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a11b20 is same with the state(6) to be set
00:29:18.147 [2024-10-11 22:52:21.001762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a123a0 is same with the state(6) to be set
00:29:18.147 [2024-10-11 22:52:21.001834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a12a00 is same with the state(6) to be set
00:29:18.147 [2024-10-11 22:52:21.001891] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a14530 is same with the state(6) to be set
00:29:18.147 [2024-10-11 22:52:21.001946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a126d0 is same with the state(6) to be set
00:29:18.147 [2024-10-11 22:52:21.002002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a13ed0 is same with the state(6) to be set
00:29:18.147 [2024-10-11 22:52:21.002059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a121c0 is same with the state(6) to be set
00:29:18.147 [2024-10-11 22:52:21.002114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a14860 is same with the state(6) to be set
00:29:18.147 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:18.406 22:52:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:29:19.345 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 327438
00:29:19.345 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:29:19.345 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 327438
00:29:19.345 22:52:22
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:29:19.345 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:29:19.345 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:29:19.345 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:29:19.345 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 327438
00:29:19.345 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:29:19.345 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:29:19.345 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:29:19.345 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:29:19.345 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:29:19.345 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:29:19.345 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:29:19.345 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:29:19.345 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:29:19.345 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup
00:29:19.345 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:29:19.345 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:19.345 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:29:19.345 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:19.345 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:19.345 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:29:19.345 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:19.345 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:29:19.345 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:29:19.345 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 327261 ']'
00:29:19.345 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 327261
00:29:19.345 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 327261 ']'
00:29:19.345 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 327261
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (327261) - No such process
00:29:19.346 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 327261 is not found'
00:29:19.346 Process with pid 327261 is not found
00:29:19.346 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:29:19.346 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:29:19.346 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:29:19.346 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:29:19.346 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save
00:29:19.346 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:29:19.346 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore
00:29:19.346 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:19.346 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:19.346 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:19.346 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:19.346 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:21.885 22:52:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:21.885
00:29:21.885 real 0m9.685s
00:29:21.885 user 0m24.402s
00:29:21.885 sys 0m5.248s
22:52:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:21.885 22:52:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:29:21.885 ************************************
00:29:21.885 END TEST nvmf_shutdown_tc4
00:29:21.885 ************************************
00:29:21.885 22:52:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:29:21.885
00:29:21.885 real 0m36.726s
00:29:21.885 user 1m38.980s
00:29:21.885 sys 0m11.593s
00:29:21.885 22:52:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:21.885 22:52:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:29:21.885 ************************************
00:29:21.885 END TEST nvmf_shutdown
00:29:21.885 ************************************
00:29:21.885 22:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:29:21.885
00:29:21.885 real 17m56.268s
00:29:21.885 user 50m14.558s
00:29:21.885 sys 3m51.245s
00:29:21.885 22:52:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:21.885 22:52:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:29:21.885 ************************************
00:29:21.885 END TEST nvmf_target_extra
00:29:21.885 ************************************
00:29:21.885 22:52:24 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:29:21.885 22:52:24 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:29:21.885 22:52:24 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:29:21.885 22:52:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:21.885 ************************************
00:29:21.885 START TEST nvmf_host
00:29:21.885 ************************************
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host --
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:29:21.885 * Looking for test storage...
00:29:21.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-:
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-:
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<'
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:29:21.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:21.885 --rc genhtml_branch_coverage=1
00:29:21.885 --rc genhtml_function_coverage=1
00:29:21.885 --rc genhtml_legend=1
00:29:21.885 --rc geninfo_all_blocks=1
00:29:21.885 --rc geninfo_unexecuted_blocks=1
00:29:21.885
00:29:21.885 '
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:29:21.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:21.885 --rc genhtml_branch_coverage=1
00:29:21.885 --rc genhtml_function_coverage=1
00:29:21.885 --rc genhtml_legend=1
00:29:21.885 --rc geninfo_all_blocks=1
00:29:21.885 --rc geninfo_unexecuted_blocks=1
00:29:21.885
00:29:21.885 '
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:29:21.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:21.885 --rc genhtml_branch_coverage=1
00:29:21.885 --rc genhtml_function_coverage=1
00:29:21.885 --rc genhtml_legend=1
00:29:21.885 --rc geninfo_all_blocks=1
00:29:21.885 --rc geninfo_unexecuted_blocks=1
00:29:21.885
00:29:21.885 '
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:29:21.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:21.885 --rc genhtml_branch_coverage=1
00:29:21.885 --rc genhtml_function_coverage=1
00:29:21.885 --rc genhtml_legend=1
00:29:21.885 --rc geninfo_all_blocks=1
00:29:21.885 --rc geninfo_unexecuted_blocks=1
00:29:21.885
00:29:21.885 '
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:21.885 22:52:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:29:21.886 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@")
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]]
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:21.886 ************************************
00:29:21.886 START TEST nvmf_multicontroller
00:29:21.886 ************************************
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:29:21.886 * Looking for test storage...
00:29:21.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-:
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-:
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<'
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:29:21.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:21.886 --rc genhtml_branch_coverage=1
00:29:21.886 --rc genhtml_function_coverage=1
00:29:21.886 --rc genhtml_legend=1
00:29:21.886 --rc geninfo_all_blocks=1
00:29:21.886 --rc geninfo_unexecuted_blocks=1
00:29:21.886
00:29:21.886 '
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:29:21.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:21.886 --rc genhtml_branch_coverage=1
00:29:21.886 --rc genhtml_function_coverage=1
00:29:21.886 --rc genhtml_legend=1
00:29:21.886 --rc geninfo_all_blocks=1
00:29:21.886 --rc geninfo_unexecuted_blocks=1
00:29:21.886
00:29:21.886 '
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:29:21.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:21.886 --rc genhtml_branch_coverage=1
00:29:21.886 --rc genhtml_function_coverage=1
00:29:21.886 --rc genhtml_legend=1
00:29:21.886 --rc geninfo_all_blocks=1
00:29:21.886 --rc geninfo_unexecuted_blocks=1
00:29:21.886
00:29:21.886 '
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:29:21.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:21.886 --rc genhtml_branch_coverage=1
00:29:21.886 --rc genhtml_function_coverage=1
00:29:21.886 --rc genhtml_legend=1
00:29:21.886 --rc geninfo_all_blocks=1
00:29:21.886 --rc geninfo_unexecuted_blocks=1
00:29:21.886
00:29:21.886 '
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:21.886 22:52:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:21.886 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:29:21.886 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:29:21.886 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob
00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh
]] 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:21.887 22:52:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:21.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@438 -- # remove_spdk_ns 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:29:21.887 22:52:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.436 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:24.436 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:24.436 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:24.436 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:24.436 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:24.436 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:24.436 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:24.436 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:24.436 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:24.436 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:24.436 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:29:24.436 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:24.436 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:24.436 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:24.436 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:24.437 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:24.437 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.437 22:52:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:24.437 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in 
"${pci_devs[@]}" 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:24.437 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:24.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:24.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:29:24.437 00:29:24.437 --- 10.0.0.2 ping statistics --- 00:29:24.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.437 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:24.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:24.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:29:24.437 00:29:24.437 --- 10.0.0.1 ping statistics --- 00:29:24.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.437 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:24.437 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=330231 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 330231 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 330231 ']' 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.438 [2024-10-11 22:52:27.346354] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:29:24.438 [2024-10-11 22:52:27.346431] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.438 [2024-10-11 22:52:27.412049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:24.438 [2024-10-11 22:52:27.458384] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.438 [2024-10-11 22:52:27.458435] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:24.438 [2024-10-11 22:52:27.458459] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.438 [2024-10-11 22:52:27.458470] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.438 [2024-10-11 22:52:27.458479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:24.438 [2024-10-11 22:52:27.460010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:24.438 [2024-10-11 22:52:27.461570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:24.438 [2024-10-11 22:52:27.461582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.438 [2024-10-11 22:52:27.606612] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.438 Malloc0 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.438 [2024-10-11 
22:52:27.661443] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.438 [2024-10-11 22:52:27.669353] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.438 Malloc1 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.438 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.697 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.697 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:24.697 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.697 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.697 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.697 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:24.697 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.697 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.697 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.697 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=330263 00:29:24.697 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:24.697 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:29:24.697 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 330263 /var/tmp/bdevperf.sock 00:29:24.697 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 330263 ']' 00:29:24.697 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:24.697 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:24.697 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:24.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:24.697 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:24.697 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.955 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:24.955 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:29:24.955 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:24.955 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.955 22:52:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:25.213 NVMe0n1 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.213 1 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:25.213 22:52:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:25.213 request: 00:29:25.213 { 00:29:25.213 "name": "NVMe0", 00:29:25.213 "trtype": "tcp", 00:29:25.213 "traddr": "10.0.0.2", 00:29:25.213 "adrfam": "ipv4", 00:29:25.213 "trsvcid": "4420", 00:29:25.213 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:25.213 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:25.213 "hostaddr": "10.0.0.1", 00:29:25.213 "prchk_reftag": false, 00:29:25.213 "prchk_guard": false, 00:29:25.213 "hdgst": false, 00:29:25.213 "ddgst": false, 00:29:25.213 "allow_unrecognized_csi": false, 00:29:25.213 "method": "bdev_nvme_attach_controller", 00:29:25.213 "req_id": 1 00:29:25.213 } 00:29:25.213 Got JSON-RPC error response 00:29:25.213 response: 00:29:25.213 { 00:29:25.213 "code": -114, 00:29:25.213 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:25.213 } 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:25.213 22:52:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:25.213 request: 00:29:25.213 { 00:29:25.213 "name": "NVMe0", 00:29:25.213 "trtype": "tcp", 00:29:25.213 "traddr": "10.0.0.2", 00:29:25.213 "adrfam": "ipv4", 00:29:25.213 "trsvcid": "4420", 00:29:25.213 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:25.213 "hostaddr": "10.0.0.1", 00:29:25.213 "prchk_reftag": false, 00:29:25.213 "prchk_guard": false, 00:29:25.213 "hdgst": false, 00:29:25.213 "ddgst": false, 00:29:25.213 "allow_unrecognized_csi": false, 00:29:25.213 "method": "bdev_nvme_attach_controller", 00:29:25.213 "req_id": 1 00:29:25.213 } 00:29:25.213 Got JSON-RPC error response 00:29:25.213 response: 00:29:25.213 { 00:29:25.213 "code": -114, 00:29:25.213 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:25.213 } 00:29:25.213 22:52:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:25.213 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:25.214 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:25.214 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:25.214 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:25.214 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:25.214 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:25.214 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.214 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:25.214 request: 00:29:25.214 { 00:29:25.214 "name": "NVMe0", 00:29:25.214 "trtype": "tcp", 00:29:25.214 "traddr": "10.0.0.2", 00:29:25.214 "adrfam": "ipv4", 00:29:25.214 "trsvcid": "4420", 00:29:25.214 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:25.214 "hostaddr": "10.0.0.1", 00:29:25.214 "prchk_reftag": false, 00:29:25.214 "prchk_guard": false, 00:29:25.214 "hdgst": false, 00:29:25.214 "ddgst": false, 00:29:25.214 "multipath": "disable", 00:29:25.214 "allow_unrecognized_csi": false, 00:29:25.214 "method": "bdev_nvme_attach_controller", 00:29:25.214 "req_id": 1 00:29:25.214 } 00:29:25.214 Got JSON-RPC error response 00:29:25.214 response: 00:29:25.214 { 00:29:25.214 "code": -114, 00:29:25.214 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:25.214 } 00:29:25.214 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:25.214 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:25.214 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:25.214 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:25.214 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:25.214 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:25.214 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:25.214 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:25.214 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:25.214 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:25.214 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:25.214 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:25.214 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:25.214 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.214 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:25.214 request: 00:29:25.214 { 00:29:25.214 "name": "NVMe0", 00:29:25.214 "trtype": "tcp", 00:29:25.214 "traddr": "10.0.0.2", 00:29:25.214 "adrfam": "ipv4", 00:29:25.214 "trsvcid": "4420", 00:29:25.214 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:25.214 "hostaddr": "10.0.0.1", 00:29:25.214 "prchk_reftag": false, 00:29:25.214 "prchk_guard": false, 00:29:25.214 "hdgst": false, 00:29:25.214 "ddgst": false, 00:29:25.214 "multipath": "failover", 00:29:25.214 "allow_unrecognized_csi": false, 00:29:25.214 "method": "bdev_nvme_attach_controller", 00:29:25.214 "req_id": 1 00:29:25.214 } 00:29:25.214 Got JSON-RPC error response 00:29:25.214 response: 00:29:25.214 { 00:29:25.214 "code": -114, 00:29:25.214 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:25.214 } 00:29:25.214 22:52:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:25.214 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:25.214 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:25.214 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:25.214 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:25.214 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:25.214 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.214 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:25.473 NVMe0n1 00:29:25.473 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.473 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:25.473 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.473 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:25.473 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.473 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:25.473 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.473 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:25.473 00:29:25.473 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.473 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:25.473 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:25.473 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.473 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:25.731 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.731 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:25.731 22:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:26.664 { 00:29:26.664 "results": [ 00:29:26.664 { 00:29:26.664 "job": "NVMe0n1", 00:29:26.664 "core_mask": "0x1", 00:29:26.664 "workload": "write", 00:29:26.664 "status": "finished", 00:29:26.664 "queue_depth": 128, 00:29:26.664 "io_size": 4096, 00:29:26.664 "runtime": 1.00618, 00:29:26.664 "iops": 18177.662048540024, 00:29:26.664 "mibps": 71.00649237710947, 00:29:26.664 "io_failed": 0, 00:29:26.664 "io_timeout": 0, 00:29:26.664 "avg_latency_us": 7030.3573799890655, 00:29:26.664 "min_latency_us": 5946.785185185186, 00:29:26.664 "max_latency_us": 13398.471111111112 00:29:26.664 } 00:29:26.664 ], 00:29:26.664 "core_count": 1 00:29:26.664 } 00:29:26.664 22:52:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:26.664 22:52:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.664 22:52:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:26.664 22:52:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.664 22:52:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:26.664 22:52:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 330263 00:29:26.664 22:52:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 330263 ']' 00:29:26.664 22:52:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 330263 00:29:26.664 22:52:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:29:26.664 22:52:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:26.664 22:52:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 330263 00:29:26.923 22:52:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:26.923 22:52:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:26.923 22:52:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 330263' 00:29:26.923 killing process with pid 330263 00:29:26.923 22:52:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 330263 00:29:26.923 22:52:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 330263 00:29:26.923 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:29:26.923 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.923 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:26.923 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.923 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:26.923 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.923 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:26.923 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.923 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:26.923 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:26.923 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:29:26.923 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:26.923 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:29:26.923 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:29:26.923 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:26.923 [2024-10-11 22:52:27.769449] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:29:26.923 [2024-10-11 22:52:27.769582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid330263 ] 00:29:26.923 [2024-10-11 22:52:27.830659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.923 [2024-10-11 22:52:27.877360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.923 [2024-10-11 22:52:28.732028] bdev.c:4701:bdev_name_add: *ERROR*: Bdev name 132845c3-e402-4457-bf4d-43a762e0db48 already exists 00:29:26.923 [2024-10-11 22:52:28.732069] bdev.c:7846:bdev_register: *ERROR*: Unable to add uuid:132845c3-e402-4457-bf4d-43a762e0db48 alias for bdev NVMe1n1 00:29:26.923 [2024-10-11 22:52:28.732093] bdev_nvme.c:4483:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:26.923 Running I/O for 1 seconds... 00:29:26.923 18162.00 IOPS, 70.95 MiB/s 00:29:26.923 Latency(us) 00:29:26.923 [2024-10-11T20:52:30.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.923 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:26.923 NVMe0n1 : 1.01 18177.66 71.01 0.00 0.00 7030.36 5946.79 13398.47 00:29:26.923 [2024-10-11T20:52:30.191Z] =================================================================================================================== 00:29:26.923 [2024-10-11T20:52:30.191Z] Total : 18177.66 71.01 0.00 0.00 7030.36 5946.79 13398.47 00:29:26.923 Received shutdown signal, test time was about 1.000000 seconds 00:29:26.923 00:29:26.923 Latency(us) 00:29:26.923 [2024-10-11T20:52:30.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.923 [2024-10-11T20:52:30.191Z] =================================================================================================================== 00:29:26.923 [2024-10-11T20:52:30.191Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:29:26.923 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:26.923 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:26.923 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:29:26.923 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:26.923 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:26.923 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:26.923 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:26.923 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:26.923 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:26.923 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:26.923 rmmod nvme_tcp 00:29:27.181 rmmod nvme_fabrics 00:29:27.181 rmmod nvme_keyring 00:29:27.181 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:27.181 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:27.181 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:27.181 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 330231 ']' 00:29:27.181 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 330231 00:29:27.181 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 330231 ']' 00:29:27.181 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 330231 
00:29:27.181 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:29:27.181 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:27.181 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 330231 00:29:27.181 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:27.181 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:27.181 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 330231' 00:29:27.181 killing process with pid 330231 00:29:27.181 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 330231 00:29:27.181 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 330231 00:29:27.441 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:27.441 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:27.441 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:27.441 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:27.441 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:29:27.441 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:27.441 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:29:27.441 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:27.441 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:29:27.441 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.441 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:27.441 22:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.344 22:52:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:29.344 00:29:29.344 real 0m7.733s 00:29:29.344 user 0m12.474s 00:29:29.344 sys 0m2.422s 00:29:29.344 22:52:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:29.344 22:52:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:29.344 ************************************ 00:29:29.344 END TEST nvmf_multicontroller 00:29:29.344 ************************************ 00:29:29.344 22:52:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:29.344 22:52:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:29.344 22:52:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:29.344 22:52:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.604 ************************************ 00:29:29.604 START TEST nvmf_aer 00:29:29.604 ************************************ 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:29.604 * Looking for test storage... 
00:29:29.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:29.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.604 --rc genhtml_branch_coverage=1 00:29:29.604 --rc genhtml_function_coverage=1 00:29:29.604 --rc genhtml_legend=1 00:29:29.604 --rc geninfo_all_blocks=1 00:29:29.604 --rc geninfo_unexecuted_blocks=1 00:29:29.604 00:29:29.604 ' 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:29.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.604 --rc 
genhtml_branch_coverage=1 00:29:29.604 --rc genhtml_function_coverage=1 00:29:29.604 --rc genhtml_legend=1 00:29:29.604 --rc geninfo_all_blocks=1 00:29:29.604 --rc geninfo_unexecuted_blocks=1 00:29:29.604 00:29:29.604 ' 00:29:29.604 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:29.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.605 --rc genhtml_branch_coverage=1 00:29:29.605 --rc genhtml_function_coverage=1 00:29:29.605 --rc genhtml_legend=1 00:29:29.605 --rc geninfo_all_blocks=1 00:29:29.605 --rc geninfo_unexecuted_blocks=1 00:29:29.605 00:29:29.605 ' 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:29.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.605 --rc genhtml_branch_coverage=1 00:29:29.605 --rc genhtml_function_coverage=1 00:29:29.605 --rc genhtml_legend=1 00:29:29.605 --rc geninfo_all_blocks=1 00:29:29.605 --rc geninfo_unexecuted_blocks=1 00:29:29.605 00:29:29.605 ' 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.605 22:52:32 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:29.605 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:29.605 22:52:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:32.136 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:32.136 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:32.136 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:32.136 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:32.136 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:32.136 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:32.136 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:32.136 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:32.136 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:32.136 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:29:32.136 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:32.136 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:32.136 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:32.136 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:32.137 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:32.137 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.137 22:52:34 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:32.137 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # 
(( 1 == 0 )) 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:32.137 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:32.137 22:52:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:32.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:32.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:29:32.137 00:29:32.137 --- 10.0.0.2 ping statistics --- 00:29:32.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.137 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:32.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:32.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:29:32.137 00:29:32.137 --- 10.0.0.1 ping statistics --- 00:29:32.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.137 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=332481 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 332481 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 332481 ']' 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:32.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:32.137 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:32.137 [2024-10-11 22:52:35.194080] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:29:32.137 [2024-10-11 22:52:35.194169] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:32.137 [2024-10-11 22:52:35.266927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:32.137 [2024-10-11 22:52:35.318379] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
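The `waitforlisten 332481` step above simply polls until the freshly started `nvmf_tgt` process accepts connections on its RPC socket (`/var/tmp/spdk.sock`). A minimal Python sketch of that wait loop follows; this is an illustration of the polling pattern, not SPDK's actual helper, and the function name is invented for this example:

```python
import os
import socket
import time

def wait_for_unix_socket(path, timeout=10.0, interval=0.1):
    """Poll until a UNIX-domain stream socket at `path` accepts a
    connection, or give up after `timeout` seconds. Mirrors the
    wait-for-RPC-socket pattern seen in the log (illustrative only)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)
                return True  # target is up and listening
            except OSError:
                pass  # socket file exists but nobody is listening yet
            finally:
                s.close()
        time.sleep(interval)
    return False
```

The connect-probe (rather than a bare existence check) matters: the socket file can exist before the application has called `listen()`.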
00:29:32.137 [2024-10-11 22:52:35.318445] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:32.137 [2024-10-11 22:52:35.318459] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:32.138 [2024-10-11 22:52:35.318471] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:32.138 [2024-10-11 22:52:35.318482] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:32.138 [2024-10-11 22:52:35.320051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.138 [2024-10-11 22:52:35.320077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:32.138 [2024-10-11 22:52:35.320140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:32.138 [2024-10-11 22:52:35.320143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:32.396 [2024-10-11 22:52:35.464477] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:32.396 Malloc0 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:32.396 [2024-10-11 22:52:35.540657] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:32.396 [ 00:29:32.396 { 00:29:32.396 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:32.396 "subtype": "Discovery", 00:29:32.396 "listen_addresses": [], 00:29:32.396 "allow_any_host": true, 00:29:32.396 "hosts": [] 00:29:32.396 }, 00:29:32.396 { 00:29:32.396 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:32.396 "subtype": "NVMe", 00:29:32.396 "listen_addresses": [ 00:29:32.396 { 00:29:32.396 "trtype": "TCP", 00:29:32.396 "adrfam": "IPv4", 00:29:32.396 "traddr": "10.0.0.2", 00:29:32.396 "trsvcid": "4420" 00:29:32.396 } 00:29:32.396 ], 00:29:32.396 "allow_any_host": true, 00:29:32.396 "hosts": [], 00:29:32.396 "serial_number": "SPDK00000000000001", 00:29:32.396 "model_number": "SPDK bdev Controller", 00:29:32.396 "max_namespaces": 2, 00:29:32.396 "min_cntlid": 1, 00:29:32.396 "max_cntlid": 65519, 00:29:32.396 "namespaces": [ 00:29:32.396 { 00:29:32.396 "nsid": 1, 00:29:32.396 "bdev_name": "Malloc0", 00:29:32.396 "name": "Malloc0", 00:29:32.396 "nguid": "D421ECF9E3B64B17885487E59A8975F3", 00:29:32.396 "uuid": "d421ecf9-e3b6-4b17-8854-87e59a8975f3" 00:29:32.396 } 00:29:32.396 ] 00:29:32.396 } 00:29:32.396 ] 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=332625 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:29:32.396 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:32.654 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:32.654 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:29:32.654 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:29:32.654 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:32.654 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:32.654 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:29:32.654 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:29:32.654 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:32.654 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:32.654 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:32.654 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:29:32.654 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:32.654 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.654 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:32.654 Malloc1 00:29:32.654 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.654 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:32.912 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.912 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:32.912 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.912 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:32.912 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.912 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:32.912 [ 00:29:32.912 { 00:29:32.912 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:32.912 "subtype": "Discovery", 00:29:32.912 "listen_addresses": [], 00:29:32.912 "allow_any_host": true, 00:29:32.912 "hosts": [] 00:29:32.912 }, 00:29:32.912 { 00:29:32.912 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:32.912 "subtype": "NVMe", 00:29:32.912 "listen_addresses": [ 00:29:32.912 { 00:29:32.912 "trtype": "TCP", 00:29:32.912 "adrfam": "IPv4", 00:29:32.912 "traddr": "10.0.0.2", 00:29:32.912 "trsvcid": "4420" 00:29:32.912 } 00:29:32.912 ], 00:29:32.912 "allow_any_host": true, 00:29:32.912 "hosts": [], 00:29:32.912 "serial_number": "SPDK00000000000001", 00:29:32.912 "model_number": 
"SPDK bdev Controller", 00:29:32.912 "max_namespaces": 2, 00:29:32.912 "min_cntlid": 1, 00:29:32.912 "max_cntlid": 65519, 00:29:32.912 "namespaces": [ 00:29:32.912 { 00:29:32.912 "nsid": 1, 00:29:32.912 "bdev_name": "Malloc0", 00:29:32.912 "name": "Malloc0", 00:29:32.912 "nguid": "D421ECF9E3B64B17885487E59A8975F3", 00:29:32.912 "uuid": "d421ecf9-e3b6-4b17-8854-87e59a8975f3" 00:29:32.912 }, 00:29:32.912 { 00:29:32.913 "nsid": 2, 00:29:32.913 "bdev_name": "Malloc1", 00:29:32.913 "name": "Malloc1", 00:29:32.913 "nguid": "075BB14AF3BC49C7BABB6AAC19F61CC3", 00:29:32.913 Asynchronous Event Request test 00:29:32.913 Attaching to 10.0.0.2 00:29:32.913 Attached to 10.0.0.2 00:29:32.913 Registering asynchronous event callbacks... 00:29:32.913 Starting namespace attribute notice tests for all controllers... 00:29:32.913 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:32.913 aer_cb - Changed Namespace 00:29:32.913 Cleaning up... 00:29:32.913 "uuid": "075bb14a-f3bc-49c7-babb-6aac19f61cc3" 00:29:32.913 } 00:29:32.913 ] 00:29:32.913 } 00:29:32.913 ] 00:29:32.913 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.913 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 332625 00:29:32.913 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:32.913 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.913 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:32.913 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.913 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:32.913 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.913 22:52:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:32.913 
22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.913 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:32.913 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.913 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:32.913 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.913 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:32.913 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:32.913 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:32.913 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:32.913 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:32.913 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:32.913 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:32.913 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:32.913 rmmod nvme_tcp 00:29:32.913 rmmod nvme_fabrics 00:29:32.913 rmmod nvme_keyring 00:29:32.913 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:32.913 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:32.913 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:32.913 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 332481 ']' 00:29:32.913 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 332481 00:29:32.913 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 332481 ']' 00:29:32.913 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # 
kill -0 332481 00:29:32.913 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:29:32.913 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:32.913 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 332481 00:29:32.913 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:32.913 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:32.913 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 332481' 00:29:32.913 killing process with pid 332481 00:29:32.913 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 332481 00:29:32.913 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 332481 00:29:33.171 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:33.171 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:33.171 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:33.171 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:33.171 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:29:33.171 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:33.171 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:29:33.171 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:33.171 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:33.171 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:33.171 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:33.171 22:52:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:35.707 00:29:35.707 real 0m5.738s 00:29:35.707 user 0m4.893s 00:29:35.707 sys 0m2.053s 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:35.707 ************************************ 00:29:35.707 END TEST nvmf_aer 00:29:35.707 ************************************ 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.707 ************************************ 00:29:35.707 START TEST nvmf_async_init 00:29:35.707 ************************************ 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:35.707 * Looking for test storage... 
00:29:35.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:35.707 22:52:38 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:35.707 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:35.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.708 --rc genhtml_branch_coverage=1 00:29:35.708 --rc genhtml_function_coverage=1 00:29:35.708 --rc genhtml_legend=1 00:29:35.708 --rc geninfo_all_blocks=1 00:29:35.708 --rc geninfo_unexecuted_blocks=1 00:29:35.708 
00:29:35.708 ' 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:35.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.708 --rc genhtml_branch_coverage=1 00:29:35.708 --rc genhtml_function_coverage=1 00:29:35.708 --rc genhtml_legend=1 00:29:35.708 --rc geninfo_all_blocks=1 00:29:35.708 --rc geninfo_unexecuted_blocks=1 00:29:35.708 00:29:35.708 ' 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:35.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.708 --rc genhtml_branch_coverage=1 00:29:35.708 --rc genhtml_function_coverage=1 00:29:35.708 --rc genhtml_legend=1 00:29:35.708 --rc geninfo_all_blocks=1 00:29:35.708 --rc geninfo_unexecuted_blocks=1 00:29:35.708 00:29:35.708 ' 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:35.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.708 --rc genhtml_branch_coverage=1 00:29:35.708 --rc genhtml_function_coverage=1 00:29:35.708 --rc genhtml_legend=1 00:29:35.708 --rc geninfo_all_blocks=1 00:29:35.708 --rc geninfo_unexecuted_blocks=1 00:29:35.708 00:29:35.708 ' 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:35.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=d1ce217d6c4f4368b0b12b94e97cf961 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:35.708 22:52:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:37.611 22:52:40 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:37.611 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:37.611 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:37.611 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ 
up == up ]] 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:37.611 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:37.611 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:37.870 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:37.870 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:37.870 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:37.870 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:37.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:37.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:29:37.870 00:29:37.870 --- 10.0.0.2 ping statistics --- 00:29:37.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.870 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:29:37.870 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:37.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:37.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:29:37.870 00:29:37.870 --- 10.0.0.1 ping statistics --- 00:29:37.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.870 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:29:37.870 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:37.870 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:29:37.870 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:37.870 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:37.870 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:37.870 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:37.870 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:37.870 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:37.870 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:37.870 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:37.870 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:37.870 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:29:37.870 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:37.870 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=334636 00:29:37.870 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:37.870 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 334636 00:29:37.870 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 334636 ']' 00:29:37.870 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.870 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:37.870 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:37.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:37.870 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:37.870 22:52:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:37.870 [2024-10-11 22:52:40.992108] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:29:37.870 [2024-10-11 22:52:40.992178] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:37.870 [2024-10-11 22:52:41.055247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.870 [2024-10-11 22:52:41.102029] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:37.870 [2024-10-11 22:52:41.102072] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:37.870 [2024-10-11 22:52:41.102096] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:37.870 [2024-10-11 22:52:41.102107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:37.870 [2024-10-11 22:52:41.102116] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:37.870 [2024-10-11 22:52:41.102735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.128 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:38.128 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:29:38.128 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:38.129 [2024-10-11 22:52:41.254519] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:38.129 null0 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d1ce217d6c4f4368b0b12b94e97cf961 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:38.129 [2024-10-11 22:52:41.294840] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.129 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:38.387 nvme0n1 00:29:38.387 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.387 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:38.387 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.387 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:38.387 [ 00:29:38.387 { 00:29:38.387 "name": "nvme0n1", 00:29:38.387 "aliases": [ 00:29:38.387 "d1ce217d-6c4f-4368-b0b1-2b94e97cf961" 00:29:38.387 ], 00:29:38.387 "product_name": "NVMe disk", 00:29:38.387 "block_size": 512, 00:29:38.387 "num_blocks": 2097152, 00:29:38.387 "uuid": "d1ce217d-6c4f-4368-b0b1-2b94e97cf961", 00:29:38.387 "numa_id": 0, 00:29:38.387 "assigned_rate_limits": { 00:29:38.387 "rw_ios_per_sec": 0, 00:29:38.387 "rw_mbytes_per_sec": 0, 00:29:38.387 "r_mbytes_per_sec": 0, 00:29:38.387 "w_mbytes_per_sec": 0 00:29:38.387 }, 00:29:38.387 "claimed": false, 00:29:38.387 "zoned": false, 00:29:38.387 "supported_io_types": { 00:29:38.387 "read": true, 00:29:38.387 "write": true, 00:29:38.387 "unmap": false, 00:29:38.387 "flush": true, 00:29:38.387 "reset": true, 00:29:38.387 "nvme_admin": true, 00:29:38.387 "nvme_io": true, 00:29:38.387 "nvme_io_md": false, 00:29:38.387 "write_zeroes": true, 00:29:38.387 "zcopy": false, 00:29:38.387 "get_zone_info": false, 00:29:38.387 "zone_management": false, 00:29:38.387 "zone_append": false, 00:29:38.387 "compare": true, 00:29:38.387 "compare_and_write": true, 00:29:38.387 "abort": true, 00:29:38.387 "seek_hole": false, 00:29:38.387 "seek_data": false, 00:29:38.387 "copy": true, 00:29:38.387 
"nvme_iov_md": false 00:29:38.387 }, 00:29:38.387 "memory_domains": [ 00:29:38.387 { 00:29:38.387 "dma_device_id": "system", 00:29:38.387 "dma_device_type": 1 00:29:38.387 } 00:29:38.387 ], 00:29:38.387 "driver_specific": { 00:29:38.387 "nvme": [ 00:29:38.387 { 00:29:38.387 "trid": { 00:29:38.387 "trtype": "TCP", 00:29:38.387 "adrfam": "IPv4", 00:29:38.387 "traddr": "10.0.0.2", 00:29:38.387 "trsvcid": "4420", 00:29:38.387 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:38.387 }, 00:29:38.387 "ctrlr_data": { 00:29:38.387 "cntlid": 1, 00:29:38.387 "vendor_id": "0x8086", 00:29:38.387 "model_number": "SPDK bdev Controller", 00:29:38.387 "serial_number": "00000000000000000000", 00:29:38.387 "firmware_revision": "25.01", 00:29:38.387 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:38.387 "oacs": { 00:29:38.387 "security": 0, 00:29:38.387 "format": 0, 00:29:38.387 "firmware": 0, 00:29:38.387 "ns_manage": 0 00:29:38.387 }, 00:29:38.387 "multi_ctrlr": true, 00:29:38.387 "ana_reporting": false 00:29:38.387 }, 00:29:38.387 "vs": { 00:29:38.387 "nvme_version": "1.3" 00:29:38.387 }, 00:29:38.387 "ns_data": { 00:29:38.387 "id": 1, 00:29:38.387 "can_share": true 00:29:38.387 } 00:29:38.387 } 00:29:38.387 ], 00:29:38.387 "mp_policy": "active_passive" 00:29:38.387 } 00:29:38.387 } 00:29:38.387 ] 00:29:38.387 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.387 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:38.387 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.387 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:38.387 [2024-10-11 22:52:41.544189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:38.387 [2024-10-11 22:52:41.544267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x6a9560 (9): Bad file descriptor 00:29:38.646 [2024-10-11 22:52:41.676716] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:38.646 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.646 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:38.646 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.646 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:38.646 [ 00:29:38.646 { 00:29:38.646 "name": "nvme0n1", 00:29:38.646 "aliases": [ 00:29:38.646 "d1ce217d-6c4f-4368-b0b1-2b94e97cf961" 00:29:38.646 ], 00:29:38.646 "product_name": "NVMe disk", 00:29:38.646 "block_size": 512, 00:29:38.646 "num_blocks": 2097152, 00:29:38.646 "uuid": "d1ce217d-6c4f-4368-b0b1-2b94e97cf961", 00:29:38.646 "numa_id": 0, 00:29:38.646 "assigned_rate_limits": { 00:29:38.646 "rw_ios_per_sec": 0, 00:29:38.646 "rw_mbytes_per_sec": 0, 00:29:38.646 "r_mbytes_per_sec": 0, 00:29:38.646 "w_mbytes_per_sec": 0 00:29:38.646 }, 00:29:38.646 "claimed": false, 00:29:38.646 "zoned": false, 00:29:38.646 "supported_io_types": { 00:29:38.646 "read": true, 00:29:38.646 "write": true, 00:29:38.646 "unmap": false, 00:29:38.646 "flush": true, 00:29:38.646 "reset": true, 00:29:38.646 "nvme_admin": true, 00:29:38.646 "nvme_io": true, 00:29:38.646 "nvme_io_md": false, 00:29:38.646 "write_zeroes": true, 00:29:38.646 "zcopy": false, 00:29:38.646 "get_zone_info": false, 00:29:38.646 "zone_management": false, 00:29:38.646 "zone_append": false, 00:29:38.646 "compare": true, 00:29:38.646 "compare_and_write": true, 00:29:38.646 "abort": true, 00:29:38.646 "seek_hole": false, 00:29:38.646 "seek_data": false, 00:29:38.646 "copy": true, 00:29:38.646 "nvme_iov_md": false 00:29:38.646 }, 00:29:38.646 "memory_domains": [ 00:29:38.646 { 00:29:38.646 
"dma_device_id": "system", 00:29:38.646 "dma_device_type": 1 00:29:38.646 } 00:29:38.646 ], 00:29:38.646 "driver_specific": { 00:29:38.646 "nvme": [ 00:29:38.646 { 00:29:38.646 "trid": { 00:29:38.646 "trtype": "TCP", 00:29:38.646 "adrfam": "IPv4", 00:29:38.646 "traddr": "10.0.0.2", 00:29:38.646 "trsvcid": "4420", 00:29:38.646 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:38.646 }, 00:29:38.646 "ctrlr_data": { 00:29:38.646 "cntlid": 2, 00:29:38.646 "vendor_id": "0x8086", 00:29:38.646 "model_number": "SPDK bdev Controller", 00:29:38.646 "serial_number": "00000000000000000000", 00:29:38.646 "firmware_revision": "25.01", 00:29:38.646 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:38.646 "oacs": { 00:29:38.646 "security": 0, 00:29:38.646 "format": 0, 00:29:38.646 "firmware": 0, 00:29:38.646 "ns_manage": 0 00:29:38.646 }, 00:29:38.646 "multi_ctrlr": true, 00:29:38.646 "ana_reporting": false 00:29:38.646 }, 00:29:38.646 "vs": { 00:29:38.646 "nvme_version": "1.3" 00:29:38.646 }, 00:29:38.646 "ns_data": { 00:29:38.646 "id": 1, 00:29:38.646 "can_share": true 00:29:38.646 } 00:29:38.646 } 00:29:38.646 ], 00:29:38.646 "mp_policy": "active_passive" 00:29:38.646 } 00:29:38.646 } 00:29:38.646 ] 00:29:38.646 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.646 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:38.646 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.646 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:38.646 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.646 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:38.646 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.fzn3Iexrtw 00:29:38.646 22:52:41 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:38.646 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.fzn3Iexrtw 00:29:38.646 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.fzn3Iexrtw 00:29:38.646 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.646 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:38.646 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.646 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:38.646 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.646 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:38.646 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.646 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:38.646 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.646 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:38.646 [2024-10-11 22:52:41.736884] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:38.646 [2024-10-11 22:52:41.737024] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:38.646 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.646 22:52:41 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:38.646 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.646 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:38.646 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:38.647 [2024-10-11 22:52:41.752928] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:38.647 nvme0n1 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:38.647 [ 00:29:38.647 { 00:29:38.647 "name": "nvme0n1", 00:29:38.647 "aliases": [ 00:29:38.647 "d1ce217d-6c4f-4368-b0b1-2b94e97cf961" 00:29:38.647 ], 00:29:38.647 "product_name": "NVMe disk", 00:29:38.647 "block_size": 512, 00:29:38.647 "num_blocks": 2097152, 00:29:38.647 "uuid": "d1ce217d-6c4f-4368-b0b1-2b94e97cf961", 00:29:38.647 "numa_id": 0, 00:29:38.647 "assigned_rate_limits": { 00:29:38.647 "rw_ios_per_sec": 0, 00:29:38.647 "rw_mbytes_per_sec": 0, 
00:29:38.647 "r_mbytes_per_sec": 0, 00:29:38.647 "w_mbytes_per_sec": 0 00:29:38.647 }, 00:29:38.647 "claimed": false, 00:29:38.647 "zoned": false, 00:29:38.647 "supported_io_types": { 00:29:38.647 "read": true, 00:29:38.647 "write": true, 00:29:38.647 "unmap": false, 00:29:38.647 "flush": true, 00:29:38.647 "reset": true, 00:29:38.647 "nvme_admin": true, 00:29:38.647 "nvme_io": true, 00:29:38.647 "nvme_io_md": false, 00:29:38.647 "write_zeroes": true, 00:29:38.647 "zcopy": false, 00:29:38.647 "get_zone_info": false, 00:29:38.647 "zone_management": false, 00:29:38.647 "zone_append": false, 00:29:38.647 "compare": true, 00:29:38.647 "compare_and_write": true, 00:29:38.647 "abort": true, 00:29:38.647 "seek_hole": false, 00:29:38.647 "seek_data": false, 00:29:38.647 "copy": true, 00:29:38.647 "nvme_iov_md": false 00:29:38.647 }, 00:29:38.647 "memory_domains": [ 00:29:38.647 { 00:29:38.647 "dma_device_id": "system", 00:29:38.647 "dma_device_type": 1 00:29:38.647 } 00:29:38.647 ], 00:29:38.647 "driver_specific": { 00:29:38.647 "nvme": [ 00:29:38.647 { 00:29:38.647 "trid": { 00:29:38.647 "trtype": "TCP", 00:29:38.647 "adrfam": "IPv4", 00:29:38.647 "traddr": "10.0.0.2", 00:29:38.647 "trsvcid": "4421", 00:29:38.647 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:38.647 }, 00:29:38.647 "ctrlr_data": { 00:29:38.647 "cntlid": 3, 00:29:38.647 "vendor_id": "0x8086", 00:29:38.647 "model_number": "SPDK bdev Controller", 00:29:38.647 "serial_number": "00000000000000000000", 00:29:38.647 "firmware_revision": "25.01", 00:29:38.647 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:38.647 "oacs": { 00:29:38.647 "security": 0, 00:29:38.647 "format": 0, 00:29:38.647 "firmware": 0, 00:29:38.647 "ns_manage": 0 00:29:38.647 }, 00:29:38.647 "multi_ctrlr": true, 00:29:38.647 "ana_reporting": false 00:29:38.647 }, 00:29:38.647 "vs": { 00:29:38.647 "nvme_version": "1.3" 00:29:38.647 }, 00:29:38.647 "ns_data": { 00:29:38.647 "id": 1, 00:29:38.647 "can_share": true 00:29:38.647 } 00:29:38.647 } 
00:29:38.647 ], 00:29:38.647 "mp_policy": "active_passive" 00:29:38.647 } 00:29:38.647 } 00:29:38.647 ] 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.fzn3Iexrtw 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:38.647 rmmod nvme_tcp 00:29:38.647 rmmod nvme_fabrics 00:29:38.647 rmmod nvme_keyring 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:38.647 22:52:41 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 334636 ']' 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 334636 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 334636 ']' 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 334636 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:38.647 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 334636 00:29:38.906 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:38.906 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:38.906 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 334636' 00:29:38.906 killing process with pid 334636 00:29:38.906 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 334636 00:29:38.906 22:52:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 334636 00:29:38.906 22:52:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:38.906 22:52:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:38.906 22:52:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:38.906 22:52:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:29:38.906 22:52:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:29:38.906 22:52:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:38.906 22:52:42 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:29:38.906 22:52:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:38.906 22:52:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:38.906 22:52:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.906 22:52:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:38.906 22:52:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:41.443 00:29:41.443 real 0m5.722s 00:29:41.443 user 0m2.198s 00:29:41.443 sys 0m1.961s 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:41.443 ************************************ 00:29:41.443 END TEST nvmf_async_init 00:29:41.443 ************************************ 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.443 ************************************ 00:29:41.443 START TEST dma 00:29:41.443 ************************************ 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:41.443 * 
Looking for test storage... 00:29:41.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:41.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.443 --rc genhtml_branch_coverage=1 00:29:41.443 --rc genhtml_function_coverage=1 00:29:41.443 --rc genhtml_legend=1 00:29:41.443 --rc geninfo_all_blocks=1 00:29:41.443 --rc geninfo_unexecuted_blocks=1 00:29:41.443 00:29:41.443 ' 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:41.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.443 --rc genhtml_branch_coverage=1 00:29:41.443 --rc genhtml_function_coverage=1 
00:29:41.443 --rc genhtml_legend=1 00:29:41.443 --rc geninfo_all_blocks=1 00:29:41.443 --rc geninfo_unexecuted_blocks=1 00:29:41.443 00:29:41.443 ' 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:41.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.443 --rc genhtml_branch_coverage=1 00:29:41.443 --rc genhtml_function_coverage=1 00:29:41.443 --rc genhtml_legend=1 00:29:41.443 --rc geninfo_all_blocks=1 00:29:41.443 --rc geninfo_unexecuted_blocks=1 00:29:41.443 00:29:41.443 ' 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:41.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.443 --rc genhtml_branch_coverage=1 00:29:41.443 --rc genhtml_function_coverage=1 00:29:41.443 --rc genhtml_legend=1 00:29:41.443 --rc geninfo_all_blocks=1 00:29:41.443 --rc geninfo_unexecuted_blocks=1 00:29:41.443 00:29:41.443 ' 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.443 22:52:44 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:41.444 
22:52:44 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:41.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:41.444 00:29:41.444 real 0m0.173s 00:29:41.444 user 0m0.118s 00:29:41.444 sys 0m0.065s 00:29:41.444 22:52:44 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:41.444 ************************************ 00:29:41.444 END TEST dma 00:29:41.444 ************************************ 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.444 ************************************ 00:29:41.444 START TEST nvmf_identify 00:29:41.444 ************************************ 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:41.444 * Looking for test storage... 
00:29:41.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:41.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.444 --rc genhtml_branch_coverage=1 00:29:41.444 --rc genhtml_function_coverage=1 00:29:41.444 --rc genhtml_legend=1 00:29:41.444 --rc geninfo_all_blocks=1 00:29:41.444 --rc geninfo_unexecuted_blocks=1 00:29:41.444 00:29:41.444 ' 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- 
# LCOV_OPTS=' 00:29:41.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.444 --rc genhtml_branch_coverage=1 00:29:41.444 --rc genhtml_function_coverage=1 00:29:41.444 --rc genhtml_legend=1 00:29:41.444 --rc geninfo_all_blocks=1 00:29:41.444 --rc geninfo_unexecuted_blocks=1 00:29:41.444 00:29:41.444 ' 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:41.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.444 --rc genhtml_branch_coverage=1 00:29:41.444 --rc genhtml_function_coverage=1 00:29:41.444 --rc genhtml_legend=1 00:29:41.444 --rc geninfo_all_blocks=1 00:29:41.444 --rc geninfo_unexecuted_blocks=1 00:29:41.444 00:29:41.444 ' 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:41.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.444 --rc genhtml_branch_coverage=1 00:29:41.444 --rc genhtml_function_coverage=1 00:29:41.444 --rc genhtml_legend=1 00:29:41.444 --rc geninfo_all_blocks=1 00:29:41.444 --rc geninfo_unexecuted_blocks=1 00:29:41.444 00:29:41.444 ' 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.444 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:29:41.445 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.445 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:41.445 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:41.445 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:41.445 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:41.445 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:41.445 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:41.445 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:41.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:41.445 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:41.445 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:41.445 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:41.445 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:41.445 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:41.445 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:41.445 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:41.445 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:41.445 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:41.445 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:41.445 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:41.445 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.445 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:41.445 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.445 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:41.445 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:41.445 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:41.445 22:52:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:43.977 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:43.977 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:29:43.977 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:43.977 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:43.977 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:43.977 22:52:46 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:43.977 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:43.977 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:29:43.977 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:43.977 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:29:43.977 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:29:43.977 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:29:43.977 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:29:43.977 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:29:43.977 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:29:43.977 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:43.977 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:43.977 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:43.977 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:43.977 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:43.977 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:43.977 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:43.977 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:43.977 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:43.977 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:43.977 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:43.977 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:43.978 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:43.978 
22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:43.978 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:43.978 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:43.978 22:52:46 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:43.978 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:43.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:43.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:29:43.978 00:29:43.978 --- 10.0.0.2 ping statistics --- 00:29:43.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.978 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:43.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:43.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:29:43.978 00:29:43.978 --- 10.0.0.1 ping statistics --- 00:29:43.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.978 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=336831 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 336831 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 336831 ']' 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:43.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:43.978 22:52:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:43.978 [2024-10-11 22:52:47.026930] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:29:43.978 [2024-10-11 22:52:47.027026] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:43.978 [2024-10-11 22:52:47.093480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:43.978 [2024-10-11 22:52:47.139474] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:43.978 [2024-10-11 22:52:47.139546] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:43.978 [2024-10-11 22:52:47.139577] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:43.978 [2024-10-11 22:52:47.139589] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:43.978 [2024-10-11 22:52:47.139599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:43.978 [2024-10-11 22:52:47.141087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.978 [2024-10-11 22:52:47.141153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:43.978 [2024-10-11 22:52:47.141263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:43.978 [2024-10-11 22:52:47.141265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.238 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:44.238 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:29:44.238 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:44.238 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.238 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:44.238 [2024-10-11 22:52:47.259188] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:44.238 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.238 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:44.238 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:44.238 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:44.238 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:44.238 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.238 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:44.238 Malloc0 00:29:44.238 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.238 22:52:47 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:44.238 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.238 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:44.238 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.239 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:44.239 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.239 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:44.239 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.239 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:44.239 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.239 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:44.239 [2024-10-11 22:52:47.345482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:44.239 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.239 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:44.239 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.239 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:44.239 22:52:47 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.239 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:44.239 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.239 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:44.239 [ 00:29:44.239 { 00:29:44.239 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:44.239 "subtype": "Discovery", 00:29:44.239 "listen_addresses": [ 00:29:44.239 { 00:29:44.239 "trtype": "TCP", 00:29:44.239 "adrfam": "IPv4", 00:29:44.239 "traddr": "10.0.0.2", 00:29:44.239 "trsvcid": "4420" 00:29:44.239 } 00:29:44.239 ], 00:29:44.239 "allow_any_host": true, 00:29:44.239 "hosts": [] 00:29:44.239 }, 00:29:44.239 { 00:29:44.239 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:44.239 "subtype": "NVMe", 00:29:44.239 "listen_addresses": [ 00:29:44.239 { 00:29:44.239 "trtype": "TCP", 00:29:44.239 "adrfam": "IPv4", 00:29:44.239 "traddr": "10.0.0.2", 00:29:44.239 "trsvcid": "4420" 00:29:44.239 } 00:29:44.239 ], 00:29:44.239 "allow_any_host": true, 00:29:44.239 "hosts": [], 00:29:44.239 "serial_number": "SPDK00000000000001", 00:29:44.239 "model_number": "SPDK bdev Controller", 00:29:44.239 "max_namespaces": 32, 00:29:44.239 "min_cntlid": 1, 00:29:44.239 "max_cntlid": 65519, 00:29:44.239 "namespaces": [ 00:29:44.239 { 00:29:44.239 "nsid": 1, 00:29:44.239 "bdev_name": "Malloc0", 00:29:44.239 "name": "Malloc0", 00:29:44.239 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:44.239 "eui64": "ABCDEF0123456789", 00:29:44.239 "uuid": "1a95aecb-a9d6-4423-8fa6-b26667f03dc1" 00:29:44.239 } 00:29:44.239 ] 00:29:44.239 } 00:29:44.239 ] 00:29:44.239 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.239 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:44.239 [2024-10-11 22:52:47.383328] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:29:44.239 [2024-10-11 22:52:47.383365] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid336857 ] 00:29:44.239 [2024-10-11 22:52:47.414818] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:29:44.239 [2024-10-11 22:52:47.414889] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:44.239 [2024-10-11 22:52:47.414900] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:44.239 [2024-10-11 22:52:47.414914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:44.239 [2024-10-11 22:52:47.414927] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:44.239 [2024-10-11 22:52:47.418971] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:29:44.239 [2024-10-11 22:52:47.419033] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1877210 0 00:29:44.239 [2024-10-11 22:52:47.426570] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:44.239 [2024-10-11 22:52:47.426591] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:44.239 [2024-10-11 22:52:47.426600] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:44.239 [2024-10-11 22:52:47.426605] 
nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:44.239 [2024-10-11 22:52:47.426656] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.239 [2024-10-11 22:52:47.426669] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.239 [2024-10-11 22:52:47.426676] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1877210) 00:29:44.239 [2024-10-11 22:52:47.426694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:44.239 [2024-10-11 22:52:47.426720] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e1440, cid 0, qid 0 00:29:44.239 [2024-10-11 22:52:47.433565] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.239 [2024-10-11 22:52:47.433584] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.239 [2024-10-11 22:52:47.433592] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.239 [2024-10-11 22:52:47.433600] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e1440) on tqpair=0x1877210 00:29:44.239 [2024-10-11 22:52:47.433621] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:44.239 [2024-10-11 22:52:47.433632] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:29:44.239 [2024-10-11 22:52:47.433642] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:29:44.239 [2024-10-11 22:52:47.433663] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.239 [2024-10-11 22:52:47.433671] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.239 [2024-10-11 22:52:47.433678] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1877210) 
00:29:44.239 [2024-10-11 22:52:47.433689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.239 [2024-10-11 22:52:47.433713] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e1440, cid 0, qid 0 00:29:44.239 [2024-10-11 22:52:47.433856] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.239 [2024-10-11 22:52:47.433868] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.239 [2024-10-11 22:52:47.433875] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.239 [2024-10-11 22:52:47.433881] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e1440) on tqpair=0x1877210 00:29:44.239 [2024-10-11 22:52:47.433896] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:29:44.239 [2024-10-11 22:52:47.433909] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:29:44.239 [2024-10-11 22:52:47.433922] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.239 [2024-10-11 22:52:47.433929] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.239 [2024-10-11 22:52:47.433935] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1877210) 00:29:44.239 [2024-10-11 22:52:47.433946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.239 [2024-10-11 22:52:47.433967] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e1440, cid 0, qid 0 00:29:44.239 [2024-10-11 22:52:47.434046] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.239 [2024-10-11 22:52:47.434060] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:29:44.239 [2024-10-11 22:52:47.434067] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.239 [2024-10-11 22:52:47.434074] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e1440) on tqpair=0x1877210 00:29:44.239 [2024-10-11 22:52:47.434083] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:29:44.239 [2024-10-11 22:52:47.434097] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:29:44.239 [2024-10-11 22:52:47.434110] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.239 [2024-10-11 22:52:47.434117] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.239 [2024-10-11 22:52:47.434123] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1877210) 00:29:44.239 [2024-10-11 22:52:47.434133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.239 [2024-10-11 22:52:47.434154] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e1440, cid 0, qid 0 00:29:44.239 [2024-10-11 22:52:47.434229] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.239 [2024-10-11 22:52:47.434242] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.239 [2024-10-11 22:52:47.434248] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.239 [2024-10-11 22:52:47.434255] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e1440) on tqpair=0x1877210 00:29:44.239 [2024-10-11 22:52:47.434264] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:44.239 [2024-10-11 22:52:47.434286] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.239 [2024-10-11 22:52:47.434296] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.239 [2024-10-11 22:52:47.434302] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1877210) 00:29:44.239 [2024-10-11 22:52:47.434313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.239 [2024-10-11 22:52:47.434333] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e1440, cid 0, qid 0 00:29:44.239 [2024-10-11 22:52:47.434425] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.239 [2024-10-11 22:52:47.434437] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.239 [2024-10-11 22:52:47.434444] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.239 [2024-10-11 22:52:47.434450] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e1440) on tqpair=0x1877210 00:29:44.239 [2024-10-11 22:52:47.434459] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:29:44.239 [2024-10-11 22:52:47.434467] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:29:44.239 [2024-10-11 22:52:47.434484] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:44.239 [2024-10-11 22:52:47.434594] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:29:44.239 [2024-10-11 22:52:47.434605] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 
00:29:44.239 [2024-10-11 22:52:47.434618] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.434625] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.434632] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1877210) 00:29:44.240 [2024-10-11 22:52:47.434642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.240 [2024-10-11 22:52:47.434663] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e1440, cid 0, qid 0 00:29:44.240 [2024-10-11 22:52:47.434769] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.240 [2024-10-11 22:52:47.434781] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.240 [2024-10-11 22:52:47.434788] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.434794] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e1440) on tqpair=0x1877210 00:29:44.240 [2024-10-11 22:52:47.434803] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:44.240 [2024-10-11 22:52:47.434819] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.434827] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.434834] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1877210) 00:29:44.240 [2024-10-11 22:52:47.434844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.240 [2024-10-11 22:52:47.434864] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e1440, cid 0, qid 0 00:29:44.240 [2024-10-11 
22:52:47.434938] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.240 [2024-10-11 22:52:47.434952] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.240 [2024-10-11 22:52:47.434959] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.434965] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e1440) on tqpair=0x1877210 00:29:44.240 [2024-10-11 22:52:47.434973] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:44.240 [2024-10-11 22:52:47.434981] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:29:44.240 [2024-10-11 22:52:47.434994] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:29:44.240 [2024-10-11 22:52:47.435008] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:29:44.240 [2024-10-11 22:52:47.435024] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.435031] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1877210) 00:29:44.240 [2024-10-11 22:52:47.435042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.240 [2024-10-11 22:52:47.435063] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e1440, cid 0, qid 0 00:29:44.240 [2024-10-11 22:52:47.435167] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:44.240 [2024-10-11 22:52:47.435180] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:29:44.240 [2024-10-11 22:52:47.435187] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.435193] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1877210): datao=0, datal=4096, cccid=0 00:29:44.240 [2024-10-11 22:52:47.435201] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18e1440) on tqpair(0x1877210): expected_datao=0, payload_size=4096 00:29:44.240 [2024-10-11 22:52:47.435209] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.435225] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.435235] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.435247] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.240 [2024-10-11 22:52:47.435257] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.240 [2024-10-11 22:52:47.435264] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.435270] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e1440) on tqpair=0x1877210 00:29:44.240 [2024-10-11 22:52:47.435281] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:29:44.240 [2024-10-11 22:52:47.435290] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:29:44.240 [2024-10-11 22:52:47.435297] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:29:44.240 [2024-10-11 22:52:47.435305] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:29:44.240 [2024-10-11 22:52:47.435313] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and 
write: 1 00:29:44.240 [2024-10-11 22:52:47.435321] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:29:44.240 [2024-10-11 22:52:47.435335] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:29:44.240 [2024-10-11 22:52:47.435346] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.435354] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.435360] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1877210) 00:29:44.240 [2024-10-11 22:52:47.435371] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:44.240 [2024-10-11 22:52:47.435392] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e1440, cid 0, qid 0 00:29:44.240 [2024-10-11 22:52:47.435482] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.240 [2024-10-11 22:52:47.435494] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.240 [2024-10-11 22:52:47.435501] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.435508] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e1440) on tqpair=0x1877210 00:29:44.240 [2024-10-11 22:52:47.435519] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.435526] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.435532] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1877210) 00:29:44.240 [2024-10-11 22:52:47.435542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.240 [2024-10-11 22:52:47.435561] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.435569] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.435575] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1877210) 00:29:44.240 [2024-10-11 22:52:47.435588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.240 [2024-10-11 22:52:47.435599] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.435606] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.435612] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1877210) 00:29:44.240 [2024-10-11 22:52:47.435620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.240 [2024-10-11 22:52:47.435630] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.435636] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.435642] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1877210) 00:29:44.240 [2024-10-11 22:52:47.435651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.240 [2024-10-11 22:52:47.435659] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:29:44.240 [2024-10-11 22:52:47.435678] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 
30000 ms) 00:29:44.240 [2024-10-11 22:52:47.435691] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.435698] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1877210) 00:29:44.240 [2024-10-11 22:52:47.435708] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.240 [2024-10-11 22:52:47.435730] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e1440, cid 0, qid 0 00:29:44.240 [2024-10-11 22:52:47.435741] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e15c0, cid 1, qid 0 00:29:44.240 [2024-10-11 22:52:47.435749] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e1740, cid 2, qid 0 00:29:44.240 [2024-10-11 22:52:47.435756] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e18c0, cid 3, qid 0 00:29:44.240 [2024-10-11 22:52:47.435763] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e1a40, cid 4, qid 0 00:29:44.240 [2024-10-11 22:52:47.435911] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.240 [2024-10-11 22:52:47.435925] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.240 [2024-10-11 22:52:47.435933] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.435940] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e1a40) on tqpair=0x1877210 00:29:44.240 [2024-10-11 22:52:47.435949] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:29:44.240 [2024-10-11 22:52:47.435957] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:29:44.240 [2024-10-11 22:52:47.435974] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.435984] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1877210) 00:29:44.240 [2024-10-11 22:52:47.435994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.240 [2024-10-11 22:52:47.436015] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e1a40, cid 4, qid 0 00:29:44.240 [2024-10-11 22:52:47.436121] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:44.240 [2024-10-11 22:52:47.436136] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:44.240 [2024-10-11 22:52:47.436143] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.436149] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1877210): datao=0, datal=4096, cccid=4 00:29:44.240 [2024-10-11 22:52:47.436161] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18e1a40) on tqpair(0x1877210): expected_datao=0, payload_size=4096 00:29:44.240 [2024-10-11 22:52:47.436169] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.436186] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.436194] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.478566] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.240 [2024-10-11 22:52:47.478584] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.240 [2024-10-11 22:52:47.478592] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.478599] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e1a40) on tqpair=0x1877210 00:29:44.240 [2024-10-11 22:52:47.478618] 
nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:29:44.240 [2024-10-11 22:52:47.478673] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.240 [2024-10-11 22:52:47.478684] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1877210) 00:29:44.240 [2024-10-11 22:52:47.478696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.241 [2024-10-11 22:52:47.478708] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.241 [2024-10-11 22:52:47.478715] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.241 [2024-10-11 22:52:47.478721] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1877210) 00:29:44.241 [2024-10-11 22:52:47.478730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.241 [2024-10-11 22:52:47.478754] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e1a40, cid 4, qid 0 00:29:44.241 [2024-10-11 22:52:47.478765] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e1bc0, cid 5, qid 0 00:29:44.241 [2024-10-11 22:52:47.478922] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:44.241 [2024-10-11 22:52:47.478934] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:44.241 [2024-10-11 22:52:47.478941] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:44.241 [2024-10-11 22:52:47.478948] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1877210): datao=0, datal=1024, cccid=4 00:29:44.241 [2024-10-11 22:52:47.478955] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18e1a40) on tqpair(0x1877210): expected_datao=0, 
payload_size=1024 00:29:44.241 [2024-10-11 22:52:47.478962] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.241 [2024-10-11 22:52:47.478972] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:44.241 [2024-10-11 22:52:47.478980] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:44.241 [2024-10-11 22:52:47.478988] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.241 [2024-10-11 22:52:47.478997] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.241 [2024-10-11 22:52:47.479003] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.241 [2024-10-11 22:52:47.479010] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e1bc0) on tqpair=0x1877210 00:29:44.504 [2024-10-11 22:52:47.519646] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.504 [2024-10-11 22:52:47.519666] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.504 [2024-10-11 22:52:47.519674] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.504 [2024-10-11 22:52:47.519681] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e1a40) on tqpair=0x1877210 00:29:44.504 [2024-10-11 22:52:47.519698] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.504 [2024-10-11 22:52:47.519707] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1877210) 00:29:44.504 [2024-10-11 22:52:47.519723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.504 [2024-10-11 22:52:47.519754] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e1a40, cid 4, qid 0 00:29:44.504 [2024-10-11 22:52:47.519852] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:44.504 [2024-10-11 22:52:47.519864] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:44.504 [2024-10-11 22:52:47.519871] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:44.504 [2024-10-11 22:52:47.519877] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1877210): datao=0, datal=3072, cccid=4 00:29:44.504 [2024-10-11 22:52:47.519884] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18e1a40) on tqpair(0x1877210): expected_datao=0, payload_size=3072 00:29:44.504 [2024-10-11 22:52:47.519892] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.504 [2024-10-11 22:52:47.519909] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:44.504 [2024-10-11 22:52:47.519918] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:44.504 [2024-10-11 22:52:47.519929] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.504 [2024-10-11 22:52:47.519939] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.504 [2024-10-11 22:52:47.519946] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.504 [2024-10-11 22:52:47.519952] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e1a40) on tqpair=0x1877210 00:29:44.504 [2024-10-11 22:52:47.519966] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.504 [2024-10-11 22:52:47.519974] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1877210) 00:29:44.504 [2024-10-11 22:52:47.519984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.504 [2024-10-11 22:52:47.520012] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e1a40, cid 4, qid 0 00:29:44.504 [2024-10-11 22:52:47.520107] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:44.504 [2024-10-11 
22:52:47.520120] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:44.504 [2024-10-11 22:52:47.520127] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:44.504 [2024-10-11 22:52:47.520134] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1877210): datao=0, datal=8, cccid=4 00:29:44.504 [2024-10-11 22:52:47.520141] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18e1a40) on tqpair(0x1877210): expected_datao=0, payload_size=8 00:29:44.504 [2024-10-11 22:52:47.520148] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.504 [2024-10-11 22:52:47.520158] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:44.504 [2024-10-11 22:52:47.520165] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:44.504 [2024-10-11 22:52:47.560641] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.504 [2024-10-11 22:52:47.560660] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.504 [2024-10-11 22:52:47.560668] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.504 [2024-10-11 22:52:47.560674] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e1a40) on tqpair=0x1877210 00:29:44.504 ===================================================== 00:29:44.504 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:44.504 ===================================================== 00:29:44.504 Controller Capabilities/Features 00:29:44.504 ================================ 00:29:44.504 Vendor ID: 0000 00:29:44.504 Subsystem Vendor ID: 0000 00:29:44.504 Serial Number: .................... 00:29:44.504 Model Number: ........................................ 
00:29:44.504 Firmware Version: 25.01 00:29:44.504 Recommended Arb Burst: 0 00:29:44.504 IEEE OUI Identifier: 00 00 00 00:29:44.504 Multi-path I/O 00:29:44.504 May have multiple subsystem ports: No 00:29:44.504 May have multiple controllers: No 00:29:44.504 Associated with SR-IOV VF: No 00:29:44.504 Max Data Transfer Size: 131072 00:29:44.504 Max Number of Namespaces: 0 00:29:44.504 Max Number of I/O Queues: 1024 00:29:44.504 NVMe Specification Version (VS): 1.3 00:29:44.504 NVMe Specification Version (Identify): 1.3 00:29:44.504 Maximum Queue Entries: 128 00:29:44.504 Contiguous Queues Required: Yes 00:29:44.504 Arbitration Mechanisms Supported 00:29:44.504 Weighted Round Robin: Not Supported 00:29:44.504 Vendor Specific: Not Supported 00:29:44.504 Reset Timeout: 15000 ms 00:29:44.504 Doorbell Stride: 4 bytes 00:29:44.504 NVM Subsystem Reset: Not Supported 00:29:44.504 Command Sets Supported 00:29:44.504 NVM Command Set: Supported 00:29:44.504 Boot Partition: Not Supported 00:29:44.504 Memory Page Size Minimum: 4096 bytes 00:29:44.504 Memory Page Size Maximum: 4096 bytes 00:29:44.504 Persistent Memory Region: Not Supported 00:29:44.504 Optional Asynchronous Events Supported 00:29:44.504 Namespace Attribute Notices: Not Supported 00:29:44.504 Firmware Activation Notices: Not Supported 00:29:44.504 ANA Change Notices: Not Supported 00:29:44.504 PLE Aggregate Log Change Notices: Not Supported 00:29:44.504 LBA Status Info Alert Notices: Not Supported 00:29:44.504 EGE Aggregate Log Change Notices: Not Supported 00:29:44.504 Normal NVM Subsystem Shutdown event: Not Supported 00:29:44.504 Zone Descriptor Change Notices: Not Supported 00:29:44.504 Discovery Log Change Notices: Supported 00:29:44.504 Controller Attributes 00:29:44.504 128-bit Host Identifier: Not Supported 00:29:44.504 Non-Operational Permissive Mode: Not Supported 00:29:44.504 NVM Sets: Not Supported 00:29:44.504 Read Recovery Levels: Not Supported 00:29:44.504 Endurance Groups: Not Supported 00:29:44.504 
Predictable Latency Mode: Not Supported 00:29:44.504 Traffic Based Keep ALive: Not Supported 00:29:44.504 Namespace Granularity: Not Supported 00:29:44.504 SQ Associations: Not Supported 00:29:44.504 UUID List: Not Supported 00:29:44.504 Multi-Domain Subsystem: Not Supported 00:29:44.504 Fixed Capacity Management: Not Supported 00:29:44.504 Variable Capacity Management: Not Supported 00:29:44.504 Delete Endurance Group: Not Supported 00:29:44.504 Delete NVM Set: Not Supported 00:29:44.504 Extended LBA Formats Supported: Not Supported 00:29:44.504 Flexible Data Placement Supported: Not Supported 00:29:44.504 00:29:44.504 Controller Memory Buffer Support 00:29:44.504 ================================ 00:29:44.504 Supported: No 00:29:44.504 00:29:44.504 Persistent Memory Region Support 00:29:44.504 ================================ 00:29:44.504 Supported: No 00:29:44.504 00:29:44.504 Admin Command Set Attributes 00:29:44.504 ============================ 00:29:44.504 Security Send/Receive: Not Supported 00:29:44.505 Format NVM: Not Supported 00:29:44.505 Firmware Activate/Download: Not Supported 00:29:44.505 Namespace Management: Not Supported 00:29:44.505 Device Self-Test: Not Supported 00:29:44.505 Directives: Not Supported 00:29:44.505 NVMe-MI: Not Supported 00:29:44.505 Virtualization Management: Not Supported 00:29:44.505 Doorbell Buffer Config: Not Supported 00:29:44.505 Get LBA Status Capability: Not Supported 00:29:44.505 Command & Feature Lockdown Capability: Not Supported 00:29:44.505 Abort Command Limit: 1 00:29:44.505 Async Event Request Limit: 4 00:29:44.505 Number of Firmware Slots: N/A 00:29:44.505 Firmware Slot 1 Read-Only: N/A 00:29:44.505 Firmware Activation Without Reset: N/A 00:29:44.505 Multiple Update Detection Support: N/A 00:29:44.505 Firmware Update Granularity: No Information Provided 00:29:44.505 Per-Namespace SMART Log: No 00:29:44.505 Asymmetric Namespace Access Log Page: Not Supported 00:29:44.505 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:29:44.505 Command Effects Log Page: Not Supported 00:29:44.505 Get Log Page Extended Data: Supported 00:29:44.505 Telemetry Log Pages: Not Supported 00:29:44.505 Persistent Event Log Pages: Not Supported 00:29:44.505 Supported Log Pages Log Page: May Support 00:29:44.505 Commands Supported & Effects Log Page: Not Supported 00:29:44.505 Feature Identifiers & Effects Log Page:May Support 00:29:44.505 NVMe-MI Commands & Effects Log Page: May Support 00:29:44.505 Data Area 4 for Telemetry Log: Not Supported 00:29:44.505 Error Log Page Entries Supported: 128 00:29:44.505 Keep Alive: Not Supported 00:29:44.505 00:29:44.505 NVM Command Set Attributes 00:29:44.505 ========================== 00:29:44.505 Submission Queue Entry Size 00:29:44.505 Max: 1 00:29:44.505 Min: 1 00:29:44.505 Completion Queue Entry Size 00:29:44.505 Max: 1 00:29:44.505 Min: 1 00:29:44.505 Number of Namespaces: 0 00:29:44.505 Compare Command: Not Supported 00:29:44.505 Write Uncorrectable Command: Not Supported 00:29:44.505 Dataset Management Command: Not Supported 00:29:44.505 Write Zeroes Command: Not Supported 00:29:44.505 Set Features Save Field: Not Supported 00:29:44.505 Reservations: Not Supported 00:29:44.505 Timestamp: Not Supported 00:29:44.505 Copy: Not Supported 00:29:44.505 Volatile Write Cache: Not Present 00:29:44.505 Atomic Write Unit (Normal): 1 00:29:44.505 Atomic Write Unit (PFail): 1 00:29:44.505 Atomic Compare & Write Unit: 1 00:29:44.505 Fused Compare & Write: Supported 00:29:44.505 Scatter-Gather List 00:29:44.505 SGL Command Set: Supported 00:29:44.505 SGL Keyed: Supported 00:29:44.505 SGL Bit Bucket Descriptor: Not Supported 00:29:44.505 SGL Metadata Pointer: Not Supported 00:29:44.505 Oversized SGL: Not Supported 00:29:44.505 SGL Metadata Address: Not Supported 00:29:44.505 SGL Offset: Supported 00:29:44.505 Transport SGL Data Block: Not Supported 00:29:44.505 Replay Protected Memory Block: Not Supported 00:29:44.505 00:29:44.505 
Firmware Slot Information 00:29:44.505 ========================= 00:29:44.505 Active slot: 0 00:29:44.505 00:29:44.505 00:29:44.505 Error Log 00:29:44.505 ========= 00:29:44.505 00:29:44.505 Active Namespaces 00:29:44.505 ================= 00:29:44.505 Discovery Log Page 00:29:44.505 ================== 00:29:44.505 Generation Counter: 2 00:29:44.505 Number of Records: 2 00:29:44.505 Record Format: 0 00:29:44.505 00:29:44.505 Discovery Log Entry 0 00:29:44.505 ---------------------- 00:29:44.505 Transport Type: 3 (TCP) 00:29:44.505 Address Family: 1 (IPv4) 00:29:44.505 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:44.505 Entry Flags: 00:29:44.505 Duplicate Returned Information: 1 00:29:44.505 Explicit Persistent Connection Support for Discovery: 1 00:29:44.505 Transport Requirements: 00:29:44.505 Secure Channel: Not Required 00:29:44.505 Port ID: 0 (0x0000) 00:29:44.505 Controller ID: 65535 (0xffff) 00:29:44.505 Admin Max SQ Size: 128 00:29:44.505 Transport Service Identifier: 4420 00:29:44.505 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:44.505 Transport Address: 10.0.0.2 00:29:44.505 Discovery Log Entry 1 00:29:44.505 ---------------------- 00:29:44.505 Transport Type: 3 (TCP) 00:29:44.505 Address Family: 1 (IPv4) 00:29:44.505 Subsystem Type: 2 (NVM Subsystem) 00:29:44.505 Entry Flags: 00:29:44.505 Duplicate Returned Information: 0 00:29:44.505 Explicit Persistent Connection Support for Discovery: 0 00:29:44.505 Transport Requirements: 00:29:44.505 Secure Channel: Not Required 00:29:44.505 Port ID: 0 (0x0000) 00:29:44.505 Controller ID: 65535 (0xffff) 00:29:44.505 Admin Max SQ Size: 128 00:29:44.505 Transport Service Identifier: 4420 00:29:44.505 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:44.505 Transport Address: 10.0.0.2 [2024-10-11 22:52:47.560791] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:29:44.505 [2024-10-11 22:52:47.560812] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e1440) on tqpair=0x1877210 00:29:44.505 [2024-10-11 22:52:47.560824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.505 [2024-10-11 22:52:47.560834] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e15c0) on tqpair=0x1877210 00:29:44.505 [2024-10-11 22:52:47.560841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.505 [2024-10-11 22:52:47.560852] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e1740) on tqpair=0x1877210 00:29:44.505 [2024-10-11 22:52:47.560860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.505 [2024-10-11 22:52:47.560868] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e18c0) on tqpair=0x1877210 00:29:44.505 [2024-10-11 22:52:47.560876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.505 [2024-10-11 22:52:47.560889] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.505 [2024-10-11 22:52:47.560897] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.505 [2024-10-11 22:52:47.560903] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1877210) 00:29:44.505 [2024-10-11 22:52:47.560914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.505 [2024-10-11 22:52:47.560938] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e18c0, cid 3, qid 0 00:29:44.505 [2024-10-11 22:52:47.561043] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.505 [2024-10-11 22:52:47.561057] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.505 [2024-10-11 22:52:47.561065] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.505 [2024-10-11 22:52:47.561071] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e18c0) on tqpair=0x1877210 00:29:44.505 [2024-10-11 22:52:47.561083] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.505 [2024-10-11 22:52:47.561091] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.505 [2024-10-11 22:52:47.561097] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1877210) 00:29:44.505 [2024-10-11 22:52:47.561108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.505 [2024-10-11 22:52:47.561134] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e18c0, cid 3, qid 0 00:29:44.505 [2024-10-11 22:52:47.561228] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.505 [2024-10-11 22:52:47.561239] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.505 [2024-10-11 22:52:47.561246] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.505 [2024-10-11 22:52:47.561253] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e18c0) on tqpair=0x1877210 00:29:44.505 [2024-10-11 22:52:47.561261] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:29:44.505 [2024-10-11 22:52:47.561274] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:29:44.505 [2024-10-11 22:52:47.561291] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.505 [2024-10-11 22:52:47.561300] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.505 [2024-10-11 
22:52:47.561306] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1877210) 00:29:44.505 [2024-10-11 22:52:47.561316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.505 [2024-10-11 22:52:47.561337] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e18c0, cid 3, qid 0 00:29:44.505 [2024-10-11 22:52:47.561431] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.505 [2024-10-11 22:52:47.561445] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.505 [2024-10-11 22:52:47.561452] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.505 [2024-10-11 22:52:47.561458] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e18c0) on tqpair=0x1877210 00:29:44.505 [2024-10-11 22:52:47.561475] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.505 [2024-10-11 22:52:47.561484] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.505 [2024-10-11 22:52:47.561490] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1877210) 00:29:44.505 [2024-10-11 22:52:47.561507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.505 [2024-10-11 22:52:47.561529] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e18c0, cid 3, qid 0 00:29:44.505 [2024-10-11 22:52:47.561613] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.505 [2024-10-11 22:52:47.561627] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.505 [2024-10-11 22:52:47.561634] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.505 [2024-10-11 22:52:47.561640] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e18c0) on tqpair=0x1877210 
00:29:44.505 [2024-10-11 22:52:47.561656] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.506 [2024-10-11 22:52:47.561665] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.506 [2024-10-11 22:52:47.561672] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1877210) 00:29:44.506 [2024-10-11 22:52:47.561682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.506 [2024-10-11 22:52:47.561703] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e18c0, cid 3, qid 0 00:29:44.506 [2024-10-11 22:52:47.561777] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.506 [2024-10-11 22:52:47.561791] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.506 [2024-10-11 22:52:47.561798] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.506 [2024-10-11 22:52:47.561805] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e18c0) on tqpair=0x1877210 00:29:44.506 [2024-10-11 22:52:47.561821] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.506 [2024-10-11 22:52:47.561830] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.506 [2024-10-11 22:52:47.561837] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1877210) 00:29:44.506 [2024-10-11 22:52:47.561847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.506 [2024-10-11 22:52:47.561867] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e18c0, cid 3, qid 0 00:29:44.506 [2024-10-11 22:52:47.561936] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.506 [2024-10-11 22:52:47.561948] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.506 
[2024-10-11 22:52:47.561955] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.506 [2024-10-11 22:52:47.561962] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e18c0) on tqpair=0x1877210 00:29:44.506 [2024-10-11 22:52:47.561977] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.506 [2024-10-11 22:52:47.561986] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.506 [2024-10-11 22:52:47.561992] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1877210) 00:29:44.506 [2024-10-11 22:52:47.562002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.506 [2024-10-11 22:52:47.562022] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e18c0, cid 3, qid 0 00:29:44.506 [2024-10-11 22:52:47.562092] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.506 [2024-10-11 22:52:47.562104] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.506 [2024-10-11 22:52:47.562110] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.506 [2024-10-11 22:52:47.562117] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e18c0) on tqpair=0x1877210 00:29:44.506 [2024-10-11 22:52:47.562133] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.506 [2024-10-11 22:52:47.562141] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.506 [2024-10-11 22:52:47.562148] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1877210) 00:29:44.506 [2024-10-11 22:52:47.562162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.506 [2024-10-11 22:52:47.562183] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e18c0, cid 3, qid 
0 00:29:44.506 [2024-10-11 22:52:47.562249] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.506 [2024-10-11 22:52:47.562261] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.506 [2024-10-11 22:52:47.562268] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.506 [2024-10-11 22:52:47.562275] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e18c0) on tqpair=0x1877210 00:29:44.506 [2024-10-11 22:52:47.562290] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.506 [2024-10-11 22:52:47.562299] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.506 [2024-10-11 22:52:47.562305] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1877210) 00:29:44.506 [2024-10-11 22:52:47.562316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.506 [2024-10-11 22:52:47.562336] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e18c0, cid 3, qid 0 00:29:44.506 [2024-10-11 22:52:47.562401] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.506 [2024-10-11 22:52:47.562413] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.506 [2024-10-11 22:52:47.562420] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.506 [2024-10-11 22:52:47.562427] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e18c0) on tqpair=0x1877210 00:29:44.506 [2024-10-11 22:52:47.562442] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.506 [2024-10-11 22:52:47.562451] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.506 [2024-10-11 22:52:47.562458] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1877210) 00:29:44.506 [2024-10-11 22:52:47.562468] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.506 [2024-10-11 22:52:47.562488] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e18c0, cid 3, qid 0 00:29:44.506 [2024-10-11 22:52:47.566580] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.506 [2024-10-11 22:52:47.566597] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.506 [2024-10-11 22:52:47.566604] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.506 [2024-10-11 22:52:47.566610] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e18c0) on tqpair=0x1877210 00:29:44.506 [2024-10-11 22:52:47.566627] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.506 [2024-10-11 22:52:47.566652] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.506 [2024-10-11 22:52:47.566658] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1877210) 00:29:44.506 [2024-10-11 22:52:47.566669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.506 [2024-10-11 22:52:47.566691] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18e18c0, cid 3, qid 0 00:29:44.506 [2024-10-11 22:52:47.566826] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.506 [2024-10-11 22:52:47.566839] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.506 [2024-10-11 22:52:47.566846] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.506 [2024-10-11 22:52:47.566853] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18e18c0) on tqpair=0x1877210 00:29:44.506 [2024-10-11 22:52:47.566866] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 
milliseconds 00:29:44.506 00:29:44.506 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:44.506 [2024-10-11 22:52:47.598953] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:29:44.506 [2024-10-11 22:52:47.598993] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid336859 ] 00:29:44.506 [2024-10-11 22:52:47.629035] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:29:44.506 [2024-10-11 22:52:47.629082] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:44.506 [2024-10-11 22:52:47.629092] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:44.506 [2024-10-11 22:52:47.629105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:44.506 [2024-10-11 22:52:47.629127] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:44.506 [2024-10-11 22:52:47.632849] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:29:44.506 [2024-10-11 22:52:47.632888] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d5c210 0 00:29:44.506 [2024-10-11 22:52:47.640571] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:44.506 [2024-10-11 22:52:47.640602] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:44.506 [2024-10-11 22:52:47.640611] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: 
host_hdgst_enable: 0 00:29:44.506 [2024-10-11 22:52:47.640617] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:44.506 [2024-10-11 22:52:47.640648] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.506 [2024-10-11 22:52:47.640660] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.506 [2024-10-11 22:52:47.640667] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d5c210) 00:29:44.506 [2024-10-11 22:52:47.640681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:44.506 [2024-10-11 22:52:47.640709] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6440, cid 0, qid 0 00:29:44.506 [2024-10-11 22:52:47.647563] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.506 [2024-10-11 22:52:47.647582] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.506 [2024-10-11 22:52:47.647590] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.506 [2024-10-11 22:52:47.647597] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc6440) on tqpair=0x1d5c210 00:29:44.506 [2024-10-11 22:52:47.647611] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:44.506 [2024-10-11 22:52:47.647621] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:29:44.506 [2024-10-11 22:52:47.647631] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:29:44.506 [2024-10-11 22:52:47.647649] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.506 [2024-10-11 22:52:47.647658] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.506 [2024-10-11 22:52:47.647664] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0x1d5c210) 00:29:44.506 [2024-10-11 22:52:47.647675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.506 [2024-10-11 22:52:47.647700] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6440, cid 0, qid 0 00:29:44.506 [2024-10-11 22:52:47.647826] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.506 [2024-10-11 22:52:47.647839] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.506 [2024-10-11 22:52:47.647851] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.506 [2024-10-11 22:52:47.647859] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc6440) on tqpair=0x1d5c210 00:29:44.506 [2024-10-11 22:52:47.647867] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:29:44.506 [2024-10-11 22:52:47.647881] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:29:44.506 [2024-10-11 22:52:47.647894] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.506 [2024-10-11 22:52:47.647902] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.506 [2024-10-11 22:52:47.647909] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d5c210) 00:29:44.506 [2024-10-11 22:52:47.647920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.506 [2024-10-11 22:52:47.647942] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6440, cid 0, qid 0 00:29:44.506 [2024-10-11 22:52:47.648026] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.506 [2024-10-11 22:52:47.648041] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:29:44.506 [2024-10-11 22:52:47.648048] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.507 [2024-10-11 22:52:47.648055] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc6440) on tqpair=0x1d5c210 00:29:44.507 [2024-10-11 22:52:47.648063] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:29:44.507 [2024-10-11 22:52:47.648078] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:29:44.507 [2024-10-11 22:52:47.648091] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.507 [2024-10-11 22:52:47.648098] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.507 [2024-10-11 22:52:47.648105] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d5c210) 00:29:44.507 [2024-10-11 22:52:47.648115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.507 [2024-10-11 22:52:47.648137] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6440, cid 0, qid 0 00:29:44.507 [2024-10-11 22:52:47.648211] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.507 [2024-10-11 22:52:47.648223] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.507 [2024-10-11 22:52:47.648231] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.507 [2024-10-11 22:52:47.648237] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc6440) on tqpair=0x1d5c210 00:29:44.507 [2024-10-11 22:52:47.648246] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:44.507 [2024-10-11 22:52:47.648263] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:29:44.507 [2024-10-11 22:52:47.648272] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.507 [2024-10-11 22:52:47.648279] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d5c210) 00:29:44.507 [2024-10-11 22:52:47.648289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.507 [2024-10-11 22:52:47.648310] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6440, cid 0, qid 0 00:29:44.507 [2024-10-11 22:52:47.648383] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.507 [2024-10-11 22:52:47.648396] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.507 [2024-10-11 22:52:47.648403] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.507 [2024-10-11 22:52:47.648410] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc6440) on tqpair=0x1d5c210 00:29:44.507 [2024-10-11 22:52:47.648417] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:29:44.507 [2024-10-11 22:52:47.648430] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:29:44.507 [2024-10-11 22:52:47.648444] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:44.507 [2024-10-11 22:52:47.648557] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:29:44.507 [2024-10-11 22:52:47.648567] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:44.507 [2024-10-11 22:52:47.648579] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:29:44.507 [2024-10-11 22:52:47.648586] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.507 [2024-10-11 22:52:47.648593] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d5c210) 00:29:44.507 [2024-10-11 22:52:47.648604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.507 [2024-10-11 22:52:47.648626] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6440, cid 0, qid 0 00:29:44.507 [2024-10-11 22:52:47.648740] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.507 [2024-10-11 22:52:47.648754] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.507 [2024-10-11 22:52:47.648761] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.507 [2024-10-11 22:52:47.648768] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc6440) on tqpair=0x1d5c210 00:29:44.507 [2024-10-11 22:52:47.648777] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:44.507 [2024-10-11 22:52:47.648793] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.507 [2024-10-11 22:52:47.648803] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.507 [2024-10-11 22:52:47.648809] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d5c210) 00:29:44.507 [2024-10-11 22:52:47.648820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.507 [2024-10-11 22:52:47.648841] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6440, cid 0, qid 0 00:29:44.507 [2024-10-11 22:52:47.648916] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.507 [2024-10-11 22:52:47.648930] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.507 [2024-10-11 22:52:47.648937] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.507 [2024-10-11 22:52:47.648944] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc6440) on tqpair=0x1d5c210 00:29:44.507 [2024-10-11 22:52:47.648951] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:44.507 [2024-10-11 22:52:47.648959] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:29:44.507 [2024-10-11 22:52:47.648973] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:29:44.507 [2024-10-11 22:52:47.648988] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:29:44.507 [2024-10-11 22:52:47.649002] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.507 [2024-10-11 22:52:47.649010] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d5c210) 00:29:44.507 [2024-10-11 22:52:47.649020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.507 [2024-10-11 22:52:47.649042] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6440, cid 0, qid 0 00:29:44.507 [2024-10-11 22:52:47.649164] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:44.507 [2024-10-11 22:52:47.649179] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:44.507 [2024-10-11 22:52:47.649187] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:44.507 [2024-10-11 22:52:47.649193] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d5c210): datao=0, datal=4096, cccid=0 00:29:44.507 [2024-10-11 22:52:47.649201] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dc6440) on tqpair(0x1d5c210): expected_datao=0, payload_size=4096 00:29:44.507 [2024-10-11 22:52:47.649209] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.507 [2024-10-11 22:52:47.649219] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:44.507 [2024-10-11 22:52:47.649227] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:44.507 [2024-10-11 22:52:47.649239] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.507 [2024-10-11 22:52:47.649249] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.507 [2024-10-11 22:52:47.649256] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.507 [2024-10-11 22:52:47.649262] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc6440) on tqpair=0x1d5c210 00:29:44.507 [2024-10-11 22:52:47.649273] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:29:44.507 [2024-10-11 22:52:47.649282] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:29:44.507 [2024-10-11 22:52:47.649289] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:29:44.507 [2024-10-11 22:52:47.649296] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:29:44.507 [2024-10-11 22:52:47.649304] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:29:44.507 [2024-10-11 22:52:47.649312] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:29:44.507 
[2024-10-11 22:52:47.649326] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:29:44.507 [2024-10-11 22:52:47.649338] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.507 [2024-10-11 22:52:47.649346] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.507 [2024-10-11 22:52:47.649352] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d5c210) 00:29:44.507 [2024-10-11 22:52:47.649363] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:44.507 [2024-10-11 22:52:47.649385] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6440, cid 0, qid 0 00:29:44.507 [2024-10-11 22:52:47.649467] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.507 [2024-10-11 22:52:47.649479] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.507 [2024-10-11 22:52:47.649486] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.507 [2024-10-11 22:52:47.649493] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc6440) on tqpair=0x1d5c210 00:29:44.507 [2024-10-11 22:52:47.649503] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.507 [2024-10-11 22:52:47.649511] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.507 [2024-10-11 22:52:47.649517] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d5c210) 00:29:44.507 [2024-10-11 22:52:47.649527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.507 [2024-10-11 22:52:47.649537] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.507 [2024-10-11 22:52:47.649544] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.507 [2024-10-11 22:52:47.649559] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d5c210) 00:29:44.507 [2024-10-11 22:52:47.649569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.507 [2024-10-11 22:52:47.649583] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.507 [2024-10-11 22:52:47.649591] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.507 [2024-10-11 22:52:47.649598] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d5c210) 00:29:44.507 [2024-10-11 22:52:47.649606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.507 [2024-10-11 22:52:47.649616] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.507 [2024-10-11 22:52:47.649623] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.507 [2024-10-11 22:52:47.649629] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d5c210) 00:29:44.507 [2024-10-11 22:52:47.649638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.507 [2024-10-11 22:52:47.649647] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:44.507 [2024-10-11 22:52:47.649666] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:44.507 [2024-10-11 22:52:47.649679] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.507 [2024-10-11 22:52:47.649686] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x1d5c210) 00:29:44.507 [2024-10-11 22:52:47.649697] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.507 [2024-10-11 22:52:47.649719] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6440, cid 0, qid 0 00:29:44.507 [2024-10-11 22:52:47.649731] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc65c0, cid 1, qid 0 00:29:44.508 [2024-10-11 22:52:47.649739] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6740, cid 2, qid 0 00:29:44.508 [2024-10-11 22:52:47.649746] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc68c0, cid 3, qid 0 00:29:44.508 [2024-10-11 22:52:47.649754] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6a40, cid 4, qid 0 00:29:44.508 [2024-10-11 22:52:47.649888] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.508 [2024-10-11 22:52:47.649903] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.508 [2024-10-11 22:52:47.649910] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.508 [2024-10-11 22:52:47.649917] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc6a40) on tqpair=0x1d5c210 00:29:44.508 [2024-10-11 22:52:47.649925] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:29:44.508 [2024-10-11 22:52:47.649934] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:44.508 [2024-10-11 22:52:47.649952] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:29:44.508 [2024-10-11 22:52:47.649967] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:44.508 [2024-10-11 22:52:47.649979] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.508 [2024-10-11 22:52:47.649986] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.508 [2024-10-11 22:52:47.649993] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d5c210) 00:29:44.508 [2024-10-11 22:52:47.650003] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:44.508 [2024-10-11 22:52:47.650024] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6a40, cid 4, qid 0 00:29:44.508 [2024-10-11 22:52:47.650131] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.508 [2024-10-11 22:52:47.650145] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.508 [2024-10-11 22:52:47.650152] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.508 [2024-10-11 22:52:47.650159] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc6a40) on tqpair=0x1d5c210 00:29:44.508 [2024-10-11 22:52:47.650226] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:29:44.508 [2024-10-11 22:52:47.650245] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:44.508 [2024-10-11 22:52:47.650260] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.508 [2024-10-11 22:52:47.650267] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d5c210) 00:29:44.508 [2024-10-11 22:52:47.650278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.508 [2024-10-11 22:52:47.650300] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6a40, cid 4, qid 0 00:29:44.508 [2024-10-11 22:52:47.650384] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:44.508 [2024-10-11 22:52:47.650397] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:44.508 [2024-10-11 22:52:47.650404] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:44.508 [2024-10-11 22:52:47.650410] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d5c210): datao=0, datal=4096, cccid=4 00:29:44.508 [2024-10-11 22:52:47.650418] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dc6a40) on tqpair(0x1d5c210): expected_datao=0, payload_size=4096 00:29:44.508 [2024-10-11 22:52:47.650426] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.508 [2024-10-11 22:52:47.650442] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:44.508 [2024-10-11 22:52:47.650451] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:44.508 [2024-10-11 22:52:47.650462] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.508 [2024-10-11 22:52:47.650473] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.508 [2024-10-11 22:52:47.650479] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.508 [2024-10-11 22:52:47.650486] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc6a40) on tqpair=0x1d5c210 00:29:44.508 [2024-10-11 22:52:47.650500] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:29:44.508 [2024-10-11 22:52:47.650521] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:29:44.508 [2024-10-11 22:52:47.650539] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:29:44.508 [2024-10-11 22:52:47.650563] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.508 [2024-10-11 22:52:47.650573] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d5c210) 00:29:44.508 [2024-10-11 22:52:47.650584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.508 [2024-10-11 22:52:47.650606] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6a40, cid 4, qid 0 00:29:44.508 [2024-10-11 22:52:47.650713] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:44.508 [2024-10-11 22:52:47.650728] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:44.508 [2024-10-11 22:52:47.650735] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:44.508 [2024-10-11 22:52:47.650741] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d5c210): datao=0, datal=4096, cccid=4 00:29:44.508 [2024-10-11 22:52:47.650749] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dc6a40) on tqpair(0x1d5c210): expected_datao=0, payload_size=4096 00:29:44.508 [2024-10-11 22:52:47.650760] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.508 [2024-10-11 22:52:47.650778] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:44.508 [2024-10-11 22:52:47.650787] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:44.508 [2024-10-11 22:52:47.650799] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.508 [2024-10-11 22:52:47.650809] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.508 [2024-10-11 22:52:47.650816] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.508 [2024-10-11 22:52:47.650822] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc6a40) on tqpair=0x1d5c210 00:29:44.508 [2024-10-11 22:52:47.650841] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:44.508 [2024-10-11 22:52:47.650860] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:44.508 [2024-10-11 22:52:47.650874] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.508 [2024-10-11 22:52:47.650882] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d5c210) 00:29:44.508 [2024-10-11 22:52:47.650893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.508 [2024-10-11 22:52:47.650914] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6a40, cid 4, qid 0 00:29:44.508 [2024-10-11 22:52:47.651002] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:44.508 [2024-10-11 22:52:47.651014] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:44.508 [2024-10-11 22:52:47.651021] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:44.508 [2024-10-11 22:52:47.651027] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d5c210): datao=0, datal=4096, cccid=4 00:29:44.508 [2024-10-11 22:52:47.651035] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dc6a40) on tqpair(0x1d5c210): expected_datao=0, payload_size=4096 00:29:44.508 [2024-10-11 22:52:47.651042] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.508 [2024-10-11 22:52:47.651058] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:44.508 [2024-10-11 22:52:47.651067] 
nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:44.508 [2024-10-11 22:52:47.651078] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.508 [2024-10-11 22:52:47.651088] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.508 [2024-10-11 22:52:47.651095] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.508 [2024-10-11 22:52:47.651101] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc6a40) on tqpair=0x1d5c210 00:29:44.508 [2024-10-11 22:52:47.651113] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:44.508 [2024-10-11 22:52:47.651128] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:29:44.508 [2024-10-11 22:52:47.651144] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:29:44.508 [2024-10-11 22:52:47.651155] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:44.508 [2024-10-11 22:52:47.651163] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:44.508 [2024-10-11 22:52:47.651172] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:29:44.508 [2024-10-11 22:52:47.651181] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:29:44.508 [2024-10-11 22:52:47.651189] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:29:44.508 [2024-10-11 
22:52:47.651200] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:29:44.508 [2024-10-11 22:52:47.651219] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.508 [2024-10-11 22:52:47.651228] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d5c210) 00:29:44.508 [2024-10-11 22:52:47.651239] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.508 [2024-10-11 22:52:47.651250] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.651257] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.651263] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d5c210) 00:29:44.509 [2024-10-11 22:52:47.651272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.509 [2024-10-11 22:52:47.651295] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6a40, cid 4, qid 0 00:29:44.509 [2024-10-11 22:52:47.651306] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6bc0, cid 5, qid 0 00:29:44.509 [2024-10-11 22:52:47.654561] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.509 [2024-10-11 22:52:47.654578] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.509 [2024-10-11 22:52:47.654586] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.654593] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc6a40) on tqpair=0x1d5c210 00:29:44.509 [2024-10-11 22:52:47.654603] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.509 [2024-10-11 22:52:47.654613] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:29:44.509 [2024-10-11 22:52:47.654620] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.654627] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc6bc0) on tqpair=0x1d5c210 00:29:44.509 [2024-10-11 22:52:47.654644] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.654653] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d5c210) 00:29:44.509 [2024-10-11 22:52:47.654663] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.509 [2024-10-11 22:52:47.654686] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6bc0, cid 5, qid 0 00:29:44.509 [2024-10-11 22:52:47.654811] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.509 [2024-10-11 22:52:47.654823] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.509 [2024-10-11 22:52:47.654831] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.654838] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc6bc0) on tqpair=0x1d5c210 00:29:44.509 [2024-10-11 22:52:47.654853] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.654862] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d5c210) 00:29:44.509 [2024-10-11 22:52:47.654873] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.509 [2024-10-11 22:52:47.654894] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6bc0, cid 5, qid 0 00:29:44.509 [2024-10-11 22:52:47.654971] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.509 
[2024-10-11 22:52:47.654985] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.509 [2024-10-11 22:52:47.654992] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.654999] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc6bc0) on tqpair=0x1d5c210 00:29:44.509 [2024-10-11 22:52:47.655015] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.655028] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d5c210) 00:29:44.509 [2024-10-11 22:52:47.655040] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.509 [2024-10-11 22:52:47.655061] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6bc0, cid 5, qid 0 00:29:44.509 [2024-10-11 22:52:47.655135] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.509 [2024-10-11 22:52:47.655148] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.509 [2024-10-11 22:52:47.655155] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.655162] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc6bc0) on tqpair=0x1d5c210 00:29:44.509 [2024-10-11 22:52:47.655185] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.655196] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d5c210) 00:29:44.509 [2024-10-11 22:52:47.655207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.509 [2024-10-11 22:52:47.655220] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.655228] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d5c210) 00:29:44.509 [2024-10-11 22:52:47.655237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.509 [2024-10-11 22:52:47.655249] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.655256] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1d5c210) 00:29:44.509 [2024-10-11 22:52:47.655266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.509 [2024-10-11 22:52:47.655281] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.655290] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d5c210) 00:29:44.509 [2024-10-11 22:52:47.655300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.509 [2024-10-11 22:52:47.655323] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6bc0, cid 5, qid 0 00:29:44.509 [2024-10-11 22:52:47.655334] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6a40, cid 4, qid 0 00:29:44.509 [2024-10-11 22:52:47.655342] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6d40, cid 6, qid 0 00:29:44.509 [2024-10-11 22:52:47.655350] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6ec0, cid 7, qid 0 00:29:44.509 [2024-10-11 22:52:47.655541] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:44.509 [2024-10-11 22:52:47.655561] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:44.509 [2024-10-11 
22:52:47.655570] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.655576] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d5c210): datao=0, datal=8192, cccid=5 00:29:44.509 [2024-10-11 22:52:47.655584] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dc6bc0) on tqpair(0x1d5c210): expected_datao=0, payload_size=8192 00:29:44.509 [2024-10-11 22:52:47.655591] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.655609] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.655618] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.655630] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:44.509 [2024-10-11 22:52:47.655641] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:44.509 [2024-10-11 22:52:47.655651] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.655658] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d5c210): datao=0, datal=512, cccid=4 00:29:44.509 [2024-10-11 22:52:47.655666] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dc6a40) on tqpair(0x1d5c210): expected_datao=0, payload_size=512 00:29:44.509 [2024-10-11 22:52:47.655673] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.655683] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.655690] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.655698] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:44.509 [2024-10-11 22:52:47.655707] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:44.509 [2024-10-11 22:52:47.655714] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.655720] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d5c210): datao=0, datal=512, cccid=6 00:29:44.509 [2024-10-11 22:52:47.655728] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dc6d40) on tqpair(0x1d5c210): expected_datao=0, payload_size=512 00:29:44.509 [2024-10-11 22:52:47.655735] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.655744] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.655751] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.655759] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:44.509 [2024-10-11 22:52:47.655768] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:44.509 [2024-10-11 22:52:47.655775] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.655781] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d5c210): datao=0, datal=4096, cccid=7 00:29:44.509 [2024-10-11 22:52:47.655788] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dc6ec0) on tqpair(0x1d5c210): expected_datao=0, payload_size=4096 00:29:44.509 [2024-10-11 22:52:47.655795] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.655805] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.655812] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.655823] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.509 [2024-10-11 22:52:47.655833] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.509 [2024-10-11 22:52:47.655840] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.655847] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc6bc0) on tqpair=0x1d5c210 00:29:44.509 [2024-10-11 22:52:47.655865] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.509 [2024-10-11 22:52:47.655877] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.509 [2024-10-11 22:52:47.655884] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.655890] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc6a40) on tqpair=0x1d5c210 00:29:44.509 [2024-10-11 22:52:47.655905] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.509 [2024-10-11 22:52:47.655931] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.509 [2024-10-11 22:52:47.655938] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.655945] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc6d40) on tqpair=0x1d5c210 00:29:44.509 [2024-10-11 22:52:47.655955] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.509 [2024-10-11 22:52:47.655964] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.509 [2024-10-11 22:52:47.655971] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.509 [2024-10-11 22:52:47.655977] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc6ec0) on tqpair=0x1d5c210
=====================================================
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
=====================================================
Controller Capabilities/Features
================================
Vendor ID: 8086
Subsystem Vendor ID: 8086
Serial Number: SPDK00000000000001
Model Number: SPDK bdev Controller
Firmware Version: 25.01
Recommended Arb Burst: 6
IEEE OUI Identifier: e4 d2 5c
Multi-path I/O
May have multiple subsystem ports: Yes
May have multiple controllers: Yes
Associated with SR-IOV VF: No
Max Data Transfer Size: 131072
Max Number of Namespaces: 32
Max Number of I/O Queues: 127
NVMe Specification Version (VS): 1.3
NVMe Specification Version (Identify): 1.3
Maximum Queue Entries: 128
Contiguous Queues Required: Yes
Arbitration Mechanisms Supported
Weighted Round Robin: Not Supported
Vendor Specific: Not Supported
Reset Timeout: 15000 ms
Doorbell Stride: 4 bytes
NVM Subsystem Reset: Not Supported
Command Sets Supported
NVM Command Set: Supported
Boot Partition: Not Supported
Memory Page Size Minimum: 4096 bytes
Memory Page Size Maximum: 4096 bytes
Persistent Memory Region: Not Supported
Optional Asynchronous Events Supported
Namespace Attribute Notices: Supported
Firmware Activation Notices: Not Supported
ANA Change Notices: Not Supported
PLE Aggregate Log Change Notices: Not Supported
LBA Status Info Alert Notices: Not Supported
EGE Aggregate Log Change Notices: Not Supported
Normal NVM Subsystem Shutdown event: Not Supported
Zone Descriptor Change Notices: Not Supported
Discovery Log Change Notices: Not Supported
Controller Attributes
128-bit Host Identifier: Supported
Non-Operational Permissive Mode: Not Supported
NVM Sets: Not Supported
Read Recovery Levels: Not Supported
Endurance Groups: Not Supported
Predictable Latency Mode: Not Supported
Traffic Based Keep ALive: Not Supported
Namespace Granularity: Not Supported
SQ Associations: Not Supported
UUID List: Not Supported
Multi-Domain Subsystem: Not Supported
Fixed Capacity Management: Not Supported
Variable Capacity Management: Not Supported
Delete Endurance Group: Not Supported
Delete NVM Set: Not Supported
Extended LBA Formats Supported: Not Supported
Flexible Data Placement Supported: Not Supported

Controller Memory Buffer Support
================================
Supported: No

Persistent Memory Region Support
================================
Supported: No

Admin Command Set Attributes
============================
Security Send/Receive: Not Supported
Format NVM: Not Supported
Firmware Activate/Download: Not Supported
Namespace Management: Not Supported
Device Self-Test: Not Supported
Directives: Not Supported
NVMe-MI: Not Supported
Virtualization Management: Not Supported
Doorbell Buffer Config: Not Supported
Get LBA Status Capability: Not Supported
Command & Feature Lockdown Capability: Not Supported
Abort Command Limit: 4
Async Event Request Limit: 4
Number of Firmware Slots: N/A
Firmware Slot 1 Read-Only: N/A
Firmware Activation Without Reset: N/A
Multiple Update Detection Support: N/A
Firmware Update Granularity: No Information Provided
Per-Namespace SMART Log: No
Asymmetric Namespace Access Log Page: Not Supported
Subsystem NQN: nqn.2016-06.io.spdk:cnode1
Command Effects Log Page: Supported
Get Log Page Extended Data: Supported
Telemetry Log Pages: Not Supported
Persistent Event Log Pages: Not Supported
Supported Log Pages Log Page: May Support
Commands Supported & Effects Log Page: Not Supported
Feature Identifiers & Effects Log Page: May Support
NVMe-MI Commands & Effects Log Page: May Support
Data Area 4 for Telemetry Log: Not Supported
Error Log Page Entries Supported: 128
Keep Alive: Supported
Keep Alive Granularity: 10000 ms

NVM Command Set Attributes
==========================
Submission Queue Entry Size
Max: 64
Min: 64
Completion Queue Entry Size
Max: 16
Min: 16
Number of Namespaces: 32
Compare Command: Supported
Write Uncorrectable Command: Not Supported
Dataset Management Command: Supported
Write Zeroes Command: Supported
Set Features Save Field: Not Supported
Reservations: Supported
Timestamp: Not Supported
Copy: Supported
Volatile Write Cache: Present
Atomic Write Unit (Normal): 1
Atomic Write Unit (PFail): 1
Atomic Compare & Write Unit: 1
Fused Compare & Write: Supported
Scatter-Gather List
SGL Command Set: Supported
SGL Keyed: Supported
SGL Bit Bucket Descriptor: Not Supported
SGL Metadata Pointer: Not Supported
Oversized SGL: Not Supported
SGL Metadata Address: Not Supported
SGL Offset: Supported
Transport SGL Data Block: Not Supported
Replay Protected Memory Block: Not Supported

Firmware Slot Information
=========================
Active slot: 1
Slot 1 Firmware Revision: 25.01

Commands Supported and Effects
==============================
Admin Commands
--------------
Get Log Page (02h): Supported
Identify (06h): Supported
Abort (08h): Supported
Set Features (09h): Supported
Get Features (0Ah): Supported
Asynchronous Event Request (0Ch): Supported
Keep Alive (18h): Supported
I/O Commands
------------
Flush (00h): Supported LBA-Change
Write (01h): Supported LBA-Change
Read (02h): Supported
Compare (05h): Supported
Write Zeroes (08h): Supported LBA-Change
Dataset Management (09h): Supported LBA-Change
Copy (19h): Supported LBA-Change

Error Log
=========

Arbitration
===========
Arbitration Burst: 1

Power Management
================
Number of Power States: 1
Current Power State: Power State #0
Power State #0:
Max Power: 0.00 W
Non-Operational State: Operational
Entry Latency: Not Reported
Exit Latency: Not Reported
Relative Read Throughput: 0
Relative Read Latency: 0
Relative Write Throughput: 0
Relative Write Latency: 0
Idle Power: Not Reported
Active Power: Not Reported
Non-Operational Permissive Mode: Not Supported

Health Information
==================
Critical Warnings:
Available Spare Space: OK
Temperature: OK
Device Reliability: OK
Read Only: No
Volatile Memory Backup: OK
Current Temperature: 0 Kelvin (-273 Celsius)
Temperature Threshold: 0 Kelvin (-273 Celsius)
Available Spare: 0%
Available Spare Threshold: 0%
Life Percentage Used:[2024-10-11 22:52:47.656102] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.510 [2024-10-11 22:52:47.656116] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d5c210) 00:29:44.510 [2024-10-11 22:52:47.656128] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.510 [2024-10-11 22:52:47.656150] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc6ec0, cid 7, qid 0 00:29:44.510 [2024-10-11 22:52:47.656282] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.510 [2024-10-11 22:52:47.656295] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.510 [2024-10-11 22:52:47.656302] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.510 [2024-10-11 22:52:47.656309] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc6ec0) on tqpair=0x1d5c210 00:29:44.510 [2024-10-11 22:52:47.656354] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:29:44.510 [2024-10-11 22:52:47.656374] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc6440) on tqpair=0x1d5c210 00:29:44.510 [2024-10-11 22:52:47.656385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.510 [2024-10-11 22:52:47.656394] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc65c0) on tqpair=0x1d5c210 00:29:44.510 [2024-10-11 22:52:47.656402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.510 [2024-10-11 22:52:47.656410] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc6740) on tqpair=0x1d5c210 00:29:44.510 [2024-10-11 22:52:47.656418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.510 [2024-10-11 22:52:47.656426] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc68c0) on tqpair=0x1d5c210 00:29:44.511 [2024-10-11 22:52:47.656434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.511 [2024-10-11 22:52:47.656446] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.656455] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.656461] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d5c210) 00:29:44.511 [2024-10-11 22:52:47.656472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.511 [2024-10-11 22:52:47.656494] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc68c0, cid 3, qid 0 00:29:44.511 [2024-10-11 22:52:47.656641] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.511 [2024-10-11 22:52:47.656656] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.511 [2024-10-11 22:52:47.656663] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.656670] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc68c0) on tqpair=0x1d5c210 00:29:44.511 [2024-10-11 22:52:47.656682] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.656690] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:29:44.511 [2024-10-11 22:52:47.656696] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d5c210) 00:29:44.511 [2024-10-11 22:52:47.656707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.511 [2024-10-11 22:52:47.656733] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc68c0, cid 3, qid 0 00:29:44.511 [2024-10-11 22:52:47.656825] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.511 [2024-10-11 22:52:47.656838] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.511 [2024-10-11 22:52:47.656845] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.656852] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc68c0) on tqpair=0x1d5c210 00:29:44.511 [2024-10-11 22:52:47.656863] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:29:44.511 [2024-10-11 22:52:47.656872] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:29:44.511 [2024-10-11 22:52:47.656888] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.656897] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.656903] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d5c210) 00:29:44.511 [2024-10-11 22:52:47.656914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.511 [2024-10-11 22:52:47.656934] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc68c0, cid 3, qid 0 00:29:44.511 [2024-10-11 22:52:47.657024] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.511 [2024-10-11 
22:52:47.657036] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.511 [2024-10-11 22:52:47.657043] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.657049] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc68c0) on tqpair=0x1d5c210 00:29:44.511 [2024-10-11 22:52:47.657065] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.657074] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.657081] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d5c210) 00:29:44.511 [2024-10-11 22:52:47.657091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.511 [2024-10-11 22:52:47.657111] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc68c0, cid 3, qid 0 00:29:44.511 [2024-10-11 22:52:47.657186] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.511 [2024-10-11 22:52:47.657199] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.511 [2024-10-11 22:52:47.657207] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.657213] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc68c0) on tqpair=0x1d5c210 00:29:44.511 [2024-10-11 22:52:47.657229] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.657239] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.657245] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d5c210) 00:29:44.511 [2024-10-11 22:52:47.657256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.511 [2024-10-11 
22:52:47.657276] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc68c0, cid 3, qid 0 00:29:44.511 [2024-10-11 22:52:47.657352] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.511 [2024-10-11 22:52:47.657365] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.511 [2024-10-11 22:52:47.657373] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.657380] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc68c0) on tqpair=0x1d5c210 00:29:44.511 [2024-10-11 22:52:47.657396] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.657405] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.657411] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d5c210) 00:29:44.511 [2024-10-11 22:52:47.657422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.511 [2024-10-11 22:52:47.657442] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc68c0, cid 3, qid 0 00:29:44.511 [2024-10-11 22:52:47.657531] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.511 [2024-10-11 22:52:47.657542] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.511 [2024-10-11 22:52:47.657558] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.657570] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc68c0) on tqpair=0x1d5c210 00:29:44.511 [2024-10-11 22:52:47.657588] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.657597] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.657604] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x1d5c210) 00:29:44.511 [2024-10-11 22:52:47.657614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.511 [2024-10-11 22:52:47.657635] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc68c0, cid 3, qid 0 00:29:44.511 [2024-10-11 22:52:47.657725] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.511 [2024-10-11 22:52:47.657737] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.511 [2024-10-11 22:52:47.657744] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.657751] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc68c0) on tqpair=0x1d5c210 00:29:44.511 [2024-10-11 22:52:47.657766] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.657776] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.657782] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d5c210) 00:29:44.511 [2024-10-11 22:52:47.657792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.511 [2024-10-11 22:52:47.657813] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc68c0, cid 3, qid 0 00:29:44.511 [2024-10-11 22:52:47.657886] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.511 [2024-10-11 22:52:47.657898] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.511 [2024-10-11 22:52:47.657905] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.657912] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc68c0) on tqpair=0x1d5c210 00:29:44.511 [2024-10-11 22:52:47.657927] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.657937] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.657943] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d5c210) 00:29:44.511 [2024-10-11 22:52:47.657953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.511 [2024-10-11 22:52:47.657973] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc68c0, cid 3, qid 0 00:29:44.511 [2024-10-11 22:52:47.658046] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.511 [2024-10-11 22:52:47.658058] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.511 [2024-10-11 22:52:47.658065] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.658072] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc68c0) on tqpair=0x1d5c210 00:29:44.511 [2024-10-11 22:52:47.658087] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.658097] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.658103] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d5c210) 00:29:44.511 [2024-10-11 22:52:47.658114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.511 [2024-10-11 22:52:47.658133] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc68c0, cid 3, qid 0 00:29:44.511 [2024-10-11 22:52:47.658203] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.511 [2024-10-11 22:52:47.658215] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.511 [2024-10-11 22:52:47.658222] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.658229] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc68c0) on tqpair=0x1d5c210 00:29:44.511 [2024-10-11 22:52:47.658248] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.658258] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.658265] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d5c210) 00:29:44.511 [2024-10-11 22:52:47.658275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.511 [2024-10-11 22:52:47.658295] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc68c0, cid 3, qid 0 00:29:44.511 [2024-10-11 22:52:47.658364] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.511 [2024-10-11 22:52:47.658376] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.511 [2024-10-11 22:52:47.658383] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.658390] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc68c0) on tqpair=0x1d5c210 00:29:44.511 [2024-10-11 22:52:47.658406] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.658415] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.658421] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d5c210) 00:29:44.511 [2024-10-11 22:52:47.658432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.511 [2024-10-11 22:52:47.658452] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc68c0, cid 3, qid 0 00:29:44.511 [2024-10-11 
22:52:47.658522] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.511 [2024-10-11 22:52:47.658534] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.511 [2024-10-11 22:52:47.658541] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.658547] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc68c0) on tqpair=0x1d5c210 00:29:44.511 [2024-10-11 22:52:47.662579] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.662590] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:44.511 [2024-10-11 22:52:47.662597] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d5c210) 00:29:44.511 [2024-10-11 22:52:47.662607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.512 [2024-10-11 22:52:47.662629] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc68c0, cid 3, qid 0 00:29:44.512 [2024-10-11 22:52:47.662751] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:44.512 [2024-10-11 22:52:47.662764] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:44.512 [2024-10-11 22:52:47.662771] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:44.512 [2024-10-11 22:52:47.662778] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc68c0) on tqpair=0x1d5c210 00:29:44.512 [2024-10-11 22:52:47.662791] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:29:44.512 0% 00:29:44.512 Data Units Read: 0 00:29:44.512 Data Units Written: 0 00:29:44.512 Host Read Commands: 0 00:29:44.512 Host Write Commands: 0 00:29:44.512 Controller Busy Time: 0 minutes 00:29:44.512 Power Cycles: 0 00:29:44.512 Power On Hours: 0 hours 00:29:44.512 
Unsafe Shutdowns: 0 00:29:44.512 Unrecoverable Media Errors: 0 00:29:44.512 Lifetime Error Log Entries: 0 00:29:44.512 Warning Temperature Time: 0 minutes 00:29:44.512 Critical Temperature Time: 0 minutes 00:29:44.512 00:29:44.512 Number of Queues 00:29:44.512 ================ 00:29:44.512 Number of I/O Submission Queues: 127 00:29:44.512 Number of I/O Completion Queues: 127 00:29:44.512 00:29:44.512 Active Namespaces 00:29:44.512 ================= 00:29:44.512 Namespace ID:1 00:29:44.512 Error Recovery Timeout: Unlimited 00:29:44.512 Command Set Identifier: NVM (00h) 00:29:44.512 Deallocate: Supported 00:29:44.512 Deallocated/Unwritten Error: Not Supported 00:29:44.512 Deallocated Read Value: Unknown 00:29:44.512 Deallocate in Write Zeroes: Not Supported 00:29:44.512 Deallocated Guard Field: 0xFFFF 00:29:44.512 Flush: Supported 00:29:44.512 Reservation: Supported 00:29:44.512 Namespace Sharing Capabilities: Multiple Controllers 00:29:44.512 Size (in LBAs): 131072 (0GiB) 00:29:44.512 Capacity (in LBAs): 131072 (0GiB) 00:29:44.512 Utilization (in LBAs): 131072 (0GiB) 00:29:44.512 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:44.512 EUI64: ABCDEF0123456789 00:29:44.512 UUID: 1a95aecb-a9d6-4423-8fa6-b26667f03dc1 00:29:44.512 Thin Provisioning: Not Supported 00:29:44.512 Per-NS Atomic Units: Yes 00:29:44.512 Atomic Boundary Size (Normal): 0 00:29:44.512 Atomic Boundary Size (PFail): 0 00:29:44.512 Atomic Boundary Offset: 0 00:29:44.512 Maximum Single Source Range Length: 65535 00:29:44.512 Maximum Copy Length: 65535 00:29:44.512 Maximum Source Range Count: 1 00:29:44.512 NGUID/EUI64 Never Reused: No 00:29:44.512 Namespace Write Protected: No 00:29:44.512 Number of LBA Formats: 1 00:29:44.512 Current LBA Format: LBA Format #00 00:29:44.512 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:44.512 00:29:44.512 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:44.512 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:44.512 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.512 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:44.512 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.512 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:44.512 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:44.512 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:44.512 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:29:44.512 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:44.512 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:29:44.512 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:44.512 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:44.512 rmmod nvme_tcp 00:29:44.512 rmmod nvme_fabrics 00:29:44.512 rmmod nvme_keyring 00:29:44.512 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:44.512 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:29:44.512 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:29:44.512 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 336831 ']' 00:29:44.512 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 336831 00:29:44.512 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 336831 ']' 00:29:44.512 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 336831 00:29:44.512 22:52:47 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:29:44.512 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:44.512 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 336831 00:29:44.771 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:44.771 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:44.771 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 336831' 00:29:44.771 killing process with pid 336831 00:29:44.771 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 336831 00:29:44.771 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 336831 00:29:44.771 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:44.771 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:44.771 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:44.771 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:29:44.771 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:29:44.771 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:44.771 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:29:44.771 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:44.771 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:44.771 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.771 22:52:47 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:44.771 22:52:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.307 22:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:47.307 00:29:47.307 real 0m5.614s 00:29:47.307 user 0m4.303s 00:29:47.307 sys 0m2.030s 00:29:47.307 22:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:47.307 22:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:47.307 ************************************ 00:29:47.307 END TEST nvmf_identify 00:29:47.307 ************************************ 00:29:47.307 22:52:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:47.307 22:52:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:47.307 22:52:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:47.307 22:52:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.307 ************************************ 00:29:47.307 START TEST nvmf_perf 00:29:47.307 ************************************ 00:29:47.307 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:47.307 * Looking for test storage... 
00:29:47.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:47.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.308 --rc genhtml_branch_coverage=1 00:29:47.308 --rc genhtml_function_coverage=1 00:29:47.308 --rc genhtml_legend=1 00:29:47.308 --rc geninfo_all_blocks=1 00:29:47.308 --rc geninfo_unexecuted_blocks=1 00:29:47.308 00:29:47.308 ' 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:47.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:29:47.308 --rc genhtml_branch_coverage=1 00:29:47.308 --rc genhtml_function_coverage=1 00:29:47.308 --rc genhtml_legend=1 00:29:47.308 --rc geninfo_all_blocks=1 00:29:47.308 --rc geninfo_unexecuted_blocks=1 00:29:47.308 00:29:47.308 ' 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:47.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.308 --rc genhtml_branch_coverage=1 00:29:47.308 --rc genhtml_function_coverage=1 00:29:47.308 --rc genhtml_legend=1 00:29:47.308 --rc geninfo_all_blocks=1 00:29:47.308 --rc geninfo_unexecuted_blocks=1 00:29:47.308 00:29:47.308 ' 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:47.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.308 --rc genhtml_branch_coverage=1 00:29:47.308 --rc genhtml_function_coverage=1 00:29:47.308 --rc genhtml_legend=1 00:29:47.308 --rc geninfo_all_blocks=1 00:29:47.308 --rc geninfo_unexecuted_blocks=1 00:29:47.308 00:29:47.308 ' 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:47.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:47.308 22:52:50 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.308 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:47.309 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.309 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:47.309 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:47.309 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:47.309 22:52:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:49.211 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:49.211 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:49.211 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:49.211 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:49.211 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:49.211 22:52:52 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:49.211 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:49.211 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:29:49.211 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:49.211 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:29:49.211 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:29:49.211 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:29:49.211 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:29:49.211 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:29:49.211 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:49.211 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:49.211 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:49.211 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:49.211 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:49.211 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:49.211 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:49.212 
22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:49.212 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:49.212 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:49.212 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:49.212 22:52:52 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:49.212 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:49.212 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:49.470 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:49.470 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:49.470 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:29:49.471 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:49.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:49.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:29:49.471 00:29:49.471 --- 10.0.0.2 ping statistics --- 00:29:49.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:49.471 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:29:49.471 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:49.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:49.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:29:49.471 00:29:49.471 --- 10.0.0.1 ping statistics --- 00:29:49.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:49.471 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:29:49.471 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:49.471 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:29:49.471 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:49.471 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:49.471 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:49.471 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:49.471 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:49.471 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:49.471 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:49.471 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:49.471 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter 
start_nvmf_tgt 00:29:49.471 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:49.471 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:49.471 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=338914 00:29:49.471 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:49.471 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 338914 00:29:49.471 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 338914 ']' 00:29:49.471 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:49.471 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:49.471 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:49.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:49.471 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:49.471 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:49.471 [2024-10-11 22:52:52.606804] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:29:49.471 [2024-10-11 22:52:52.606898] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:49.471 [2024-10-11 22:52:52.677298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:49.471 [2024-10-11 22:52:52.727105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:49.471 [2024-10-11 22:52:52.727159] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:49.471 [2024-10-11 22:52:52.727173] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:49.471 [2024-10-11 22:52:52.727183] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:49.471 [2024-10-11 22:52:52.727192] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:49.471 [2024-10-11 22:52:52.728746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:49.471 [2024-10-11 22:52:52.728805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:49.471 [2024-10-11 22:52:52.728828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:49.471 [2024-10-11 22:52:52.728835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.729 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:49.729 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:29:49.729 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:49.729 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:49.729 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:49.729 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:49.729 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:49.729 22:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:53.008 22:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:53.008 22:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:53.008 22:52:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:29:53.008 22:52:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:53.573 22:52:56 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:29:53.573 22:52:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:29:53.573 22:52:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:53.573 22:52:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:53.573 22:52:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:53.830 [2024-10-11 22:52:56.871955] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.830 22:52:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:54.088 22:52:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:54.088 22:52:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:54.345 22:52:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:54.345 22:52:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:54.603 22:52:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:54.860 [2024-10-11 22:52:57.975921] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:54.860 22:52:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:29:55.118 22:52:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:29:55.118 22:52:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:29:55.118 22:52:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:55.118 22:52:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:29:56.490 Initializing NVMe Controllers 00:29:56.490 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:29:56.490 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:29:56.490 Initialization complete. Launching workers. 00:29:56.490 ======================================================== 00:29:56.490 Latency(us) 00:29:56.490 Device Information : IOPS MiB/s Average min max 00:29:56.490 PCIE (0000:88:00.0) NSID 1 from core 0: 85002.99 332.04 375.91 31.77 5311.49 00:29:56.490 ======================================================== 00:29:56.490 Total : 85002.99 332.04 375.91 31.77 5311.49 00:29:56.490 00:29:56.490 22:52:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:57.862 Initializing NVMe Controllers 00:29:57.862 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:57.862 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:57.862 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:57.862 Initialization complete. Launching workers. 
00:29:57.862 ======================================================== 00:29:57.862 Latency(us) 00:29:57.862 Device Information : IOPS MiB/s Average min max 00:29:57.862 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 75.00 0.29 13526.90 156.39 44824.99 00:29:57.862 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 60.00 0.23 16740.93 7756.94 51859.91 00:29:57.862 ======================================================== 00:29:57.862 Total : 135.00 0.53 14955.36 156.39 51859.91 00:29:57.862 00:29:57.862 22:53:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:59.235 Initializing NVMe Controllers 00:29:59.235 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:59.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:59.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:59.235 Initialization complete. Launching workers. 
00:29:59.235 ======================================================== 00:29:59.235 Latency(us) 00:29:59.235 Device Information : IOPS MiB/s Average min max 00:29:59.235 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8255.95 32.25 3892.38 740.10 7854.79 00:29:59.235 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3878.98 15.15 8288.06 5961.49 16094.77 00:29:59.235 ======================================================== 00:29:59.235 Total : 12134.93 47.40 5297.47 740.10 16094.77 00:29:59.235 00:29:59.235 22:53:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:59.235 22:53:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:59.235 22:53:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:01.764 Initializing NVMe Controllers 00:30:01.764 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:01.764 Controller IO queue size 128, less than required. 00:30:01.764 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:01.764 Controller IO queue size 128, less than required. 00:30:01.764 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:01.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:01.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:01.765 Initialization complete. Launching workers. 
00:30:01.765 ======================================================== 00:30:01.765 Latency(us) 00:30:01.765 Device Information : IOPS MiB/s Average min max 00:30:01.765 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1695.73 423.93 76804.03 45700.98 138043.45 00:30:01.765 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 576.55 144.14 228523.69 99959.35 385925.36 00:30:01.765 ======================================================== 00:30:01.765 Total : 2272.28 568.07 115300.07 45700.98 385925.36 00:30:01.765 00:30:01.765 22:53:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:02.022 No valid NVMe controllers or AIO or URING devices found 00:30:02.022 Initializing NVMe Controllers 00:30:02.022 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:02.022 Controller IO queue size 128, less than required. 00:30:02.022 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:02.022 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:02.022 Controller IO queue size 128, less than required. 00:30:02.022 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:02.022 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:30:02.022 WARNING: Some requested NVMe devices were skipped 00:30:02.022 22:53:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:04.550 Initializing NVMe Controllers 00:30:04.550 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:04.550 Controller IO queue size 128, less than required. 00:30:04.550 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:04.550 Controller IO queue size 128, less than required. 00:30:04.550 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:04.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:04.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:04.550 Initialization complete. Launching workers. 
00:30:04.550 00:30:04.550 ==================== 00:30:04.550 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:04.550 TCP transport: 00:30:04.550 polls: 9052 00:30:04.550 idle_polls: 5829 00:30:04.550 sock_completions: 3223 00:30:04.550 nvme_completions: 5965 00:30:04.550 submitted_requests: 8936 00:30:04.550 queued_requests: 1 00:30:04.550 00:30:04.550 ==================== 00:30:04.550 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:04.550 TCP transport: 00:30:04.550 polls: 12079 00:30:04.550 idle_polls: 8479 00:30:04.550 sock_completions: 3600 00:30:04.550 nvme_completions: 6483 00:30:04.550 submitted_requests: 9658 00:30:04.550 queued_requests: 1 00:30:04.550 ======================================================== 00:30:04.550 Latency(us) 00:30:04.550 Device Information : IOPS MiB/s Average min max 00:30:04.550 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1487.94 371.99 88227.64 50573.90 141981.00 00:30:04.550 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1617.18 404.29 79463.80 40883.26 109077.64 00:30:04.550 ======================================================== 00:30:04.550 Total : 3105.12 776.28 83663.34 40883.26 141981.00 00:30:04.550 00:30:04.550 22:53:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:04.551 22:53:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:04.808 22:53:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:04.808 22:53:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:30:04.808 22:53:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:08.990 22:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@72 -- # ls_guid=12abf83b-0a9e-499e-a4a5-2d85f67a438e 00:30:08.990 22:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 12abf83b-0a9e-499e-a4a5-2d85f67a438e 00:30:08.990 22:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=12abf83b-0a9e-499e-a4a5-2d85f67a438e 00:30:08.990 22:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:08.990 22:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:30:08.990 22:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:30:08.990 22:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:08.990 22:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:08.990 { 00:30:08.990 "uuid": "12abf83b-0a9e-499e-a4a5-2d85f67a438e", 00:30:08.990 "name": "lvs_0", 00:30:08.990 "base_bdev": "Nvme0n1", 00:30:08.990 "total_data_clusters": 238234, 00:30:08.990 "free_clusters": 238234, 00:30:08.990 "block_size": 512, 00:30:08.990 "cluster_size": 4194304 00:30:08.990 } 00:30:08.990 ]' 00:30:08.990 22:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="12abf83b-0a9e-499e-a4a5-2d85f67a438e") .free_clusters' 00:30:08.990 22:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:30:08.990 22:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="12abf83b-0a9e-499e-a4a5-2d85f67a438e") .cluster_size' 00:30:08.990 22:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:08.990 22:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:30:08.990 22:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 
00:30:08.990 952936 00:30:08.990 22:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:08.990 22:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:08.990 22:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 12abf83b-0a9e-499e-a4a5-2d85f67a438e lbd_0 20480 00:30:09.248 22:53:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=f52952d5-d9ca-40d8-b6e3-a5a3e8b8a616 00:30:09.248 22:53:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore f52952d5-d9ca-40d8-b6e3-a5a3e8b8a616 lvs_n_0 00:30:10.180 22:53:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=7f409513-ee3c-44e6-8e12-d6382640c7d3 00:30:10.180 22:53:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 7f409513-ee3c-44e6-8e12-d6382640c7d3 00:30:10.180 22:53:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=7f409513-ee3c-44e6-8e12-d6382640c7d3 00:30:10.180 22:53:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:10.180 22:53:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:30:10.180 22:53:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:30:10.180 22:53:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:10.180 22:53:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:10.180 { 00:30:10.180 "uuid": "12abf83b-0a9e-499e-a4a5-2d85f67a438e", 00:30:10.180 "name": "lvs_0", 00:30:10.180 "base_bdev": "Nvme0n1", 00:30:10.180 "total_data_clusters": 238234, 00:30:10.180 "free_clusters": 233114, 00:30:10.180 "block_size": 512, 00:30:10.180 
"cluster_size": 4194304 00:30:10.180 }, 00:30:10.180 { 00:30:10.180 "uuid": "7f409513-ee3c-44e6-8e12-d6382640c7d3", 00:30:10.180 "name": "lvs_n_0", 00:30:10.180 "base_bdev": "f52952d5-d9ca-40d8-b6e3-a5a3e8b8a616", 00:30:10.180 "total_data_clusters": 5114, 00:30:10.180 "free_clusters": 5114, 00:30:10.180 "block_size": 512, 00:30:10.180 "cluster_size": 4194304 00:30:10.180 } 00:30:10.180 ]' 00:30:10.180 22:53:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="7f409513-ee3c-44e6-8e12-d6382640c7d3") .free_clusters' 00:30:10.180 22:53:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:30:10.180 22:53:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="7f409513-ee3c-44e6-8e12-d6382640c7d3") .cluster_size' 00:30:10.437 22:53:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:10.437 22:53:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:30:10.437 22:53:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:30:10.438 20456 00:30:10.438 22:53:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:10.438 22:53:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7f409513-ee3c-44e6-8e12-d6382640c7d3 lbd_nest_0 20456 00:30:10.695 22:53:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=731ed7b4-c8bc-4d21-b8af-9cb63231d28e 00:30:10.695 22:53:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:10.953 22:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:10.953 22:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 731ed7b4-c8bc-4d21-b8af-9cb63231d28e 00:30:11.211 22:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:11.469 22:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:11.469 22:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:11.469 22:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:11.469 22:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:11.469 22:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:23.666 Initializing NVMe Controllers 00:30:23.666 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:23.666 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:23.666 Initialization complete. Launching workers. 
00:30:23.666 ======================================================== 00:30:23.666 Latency(us) 00:30:23.666 Device Information : IOPS MiB/s Average min max 00:30:23.666 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 48.48 0.02 20693.38 165.74 45798.27 00:30:23.666 ======================================================== 00:30:23.666 Total : 48.48 0.02 20693.38 165.74 45798.27 00:30:23.666 00:30:23.666 22:53:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:23.666 22:53:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:33.628 Initializing NVMe Controllers 00:30:33.628 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:33.628 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:33.628 Initialization complete. Launching workers. 
00:30:33.628 ======================================================== 00:30:33.628 Latency(us) 00:30:33.628 Device Information : IOPS MiB/s Average min max 00:30:33.628 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 75.30 9.41 13286.96 5023.80 47901.00 00:30:33.628 ======================================================== 00:30:33.628 Total : 75.30 9.41 13286.96 5023.80 47901.00 00:30:33.628 00:30:33.628 22:53:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:33.628 22:53:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:33.628 22:53:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:43.590 Initializing NVMe Controllers 00:30:43.590 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:43.590 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:43.590 Initialization complete. Launching workers. 
00:30:43.590 ======================================================== 00:30:43.590 Latency(us) 00:30:43.590 Device Information : IOPS MiB/s Average min max 00:30:43.590 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7628.59 3.72 4202.95 280.40 47821.10 00:30:43.590 ======================================================== 00:30:43.590 Total : 7628.59 3.72 4202.95 280.40 47821.10 00:30:43.590 00:30:43.590 22:53:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:43.590 22:53:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:53.556 Initializing NVMe Controllers 00:30:53.556 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:53.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:53.556 Initialization complete. Launching workers. 
00:30:53.556 ======================================================== 00:30:53.556 Latency(us) 00:30:53.556 Device Information : IOPS MiB/s Average min max 00:30:53.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3946.01 493.25 8109.66 785.82 17231.91 00:30:53.556 ======================================================== 00:30:53.556 Total : 3946.01 493.25 8109.66 785.82 17231.91 00:30:53.556 00:30:53.556 22:53:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:53.556 22:53:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:53.556 22:53:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:03.520 Initializing NVMe Controllers 00:31:03.520 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:03.520 Controller IO queue size 128, less than required. 00:31:03.520 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:03.520 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:03.520 Initialization complete. Launching workers. 
00:31:03.520 ========================================================
00:31:03.520 Latency(us)
00:31:03.520 Device Information : IOPS MiB/s Average min max
00:31:03.520 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11715.10 5.72 10932.08 1747.40 26130.09
00:31:03.520 ========================================================
00:31:03.520 Total : 11715.10 5.72 10932.08 1747.40 26130.09
00:31:03.520
00:31:03.520 22:54:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:31:03.520 22:54:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:31:13.488 Initializing NVMe Controllers
00:31:13.488 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:13.488 Controller IO queue size 128, less than required.
00:31:13.488 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:13.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:13.488 Initialization complete. Launching workers.
00:31:13.488 ========================================================
00:31:13.488 Latency(us)
00:31:13.488 Device Information : IOPS MiB/s Average min max
00:31:13.488 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1180.59 147.57 108462.96 15744.90 230545.31
00:31:13.488 ========================================================
00:31:13.488 Total : 1180.59 147.57 108462.96 15744.90 230545.31
00:31:13.488
00:31:13.488 22:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:13.746 22:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 731ed7b4-c8bc-4d21-b8af-9cb63231d28e
00:31:14.311 22:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
00:31:14.569 22:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f52952d5-d9ca-40d8-b6e3-a5a3e8b8a616
00:31:15.134 22:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0
00:31:15.392 22:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:31:15.392 22:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:31:15.392 22:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup
00:31:15.392 22:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:31:15.392 22:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:15.392 22:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:31:15.392 22:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i
in {1..20} 00:31:15.392 22:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:15.392 rmmod nvme_tcp 00:31:15.392 rmmod nvme_fabrics 00:31:15.392 rmmod nvme_keyring 00:31:15.392 22:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:15.392 22:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:15.392 22:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:15.392 22:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 338914 ']' 00:31:15.392 22:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 338914 00:31:15.392 22:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 338914 ']' 00:31:15.392 22:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 338914 00:31:15.392 22:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:31:15.392 22:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:15.392 22:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 338914 00:31:15.392 22:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:15.392 22:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:15.392 22:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 338914' 00:31:15.392 killing process with pid 338914 00:31:15.392 22:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 338914 00:31:15.392 22:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 338914 00:31:17.293 22:54:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:17.293 22:54:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == 
\t\c\p ]] 00:31:17.293 22:54:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:17.293 22:54:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:31:17.293 22:54:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:31:17.293 22:54:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:17.293 22:54:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:31:17.293 22:54:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:17.293 22:54:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:17.293 22:54:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.293 22:54:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:17.293 22:54:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:19.200 00:31:19.200 real 1m32.065s 00:31:19.200 user 5m41.662s 00:31:19.200 sys 0m15.395s 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:19.200 ************************************ 00:31:19.200 END TEST nvmf_perf 00:31:19.200 ************************************ 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:19.200 ************************************ 00:31:19.200 START TEST nvmf_fio_host 00:31:19.200 ************************************ 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:19.200 * Looking for test storage... 00:31:19.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:19.200 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- 
# export 'LCOV_OPTS= 00:31:19.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.201 --rc genhtml_branch_coverage=1 00:31:19.201 --rc genhtml_function_coverage=1 00:31:19.201 --rc genhtml_legend=1 00:31:19.201 --rc geninfo_all_blocks=1 00:31:19.201 --rc geninfo_unexecuted_blocks=1 00:31:19.201 00:31:19.201 ' 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:19.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.201 --rc genhtml_branch_coverage=1 00:31:19.201 --rc genhtml_function_coverage=1 00:31:19.201 --rc genhtml_legend=1 00:31:19.201 --rc geninfo_all_blocks=1 00:31:19.201 --rc geninfo_unexecuted_blocks=1 00:31:19.201 00:31:19.201 ' 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:19.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.201 --rc genhtml_branch_coverage=1 00:31:19.201 --rc genhtml_function_coverage=1 00:31:19.201 --rc genhtml_legend=1 00:31:19.201 --rc geninfo_all_blocks=1 00:31:19.201 --rc geninfo_unexecuted_blocks=1 00:31:19.201 00:31:19.201 ' 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:19.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.201 --rc genhtml_branch_coverage=1 00:31:19.201 --rc genhtml_function_coverage=1 00:31:19.201 --rc genhtml_legend=1 00:31:19.201 --rc geninfo_all_blocks=1 00:31:19.201 --rc geninfo_unexecuted_blocks=1 00:31:19.201 00:31:19.201 ' 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:19.201 22:54:22 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:19.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:19.201 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:19.202 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:19.202 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:19.202 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:19.202 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:19.202 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.202 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:19.202 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.202 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:19.202 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:19.202 22:54:22 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:19.202 22:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:31:21.734 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:21.734 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.734 22:54:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:21.734 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:21.734 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:21.735 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 
00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:21.735 22:54:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:21.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:21.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:31:21.735 00:31:21.735 --- 10.0.0.2 ping statistics --- 00:31:21.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.735 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:21.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:21.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:31:21.735 00:31:21.735 --- 10.0.0.1 ping statistics --- 00:31:21.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.735 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=351625 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 351625 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 351625 ']' 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:21.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.735 [2024-10-11 22:54:24.735059] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:31:21.735 [2024-10-11 22:54:24.735147] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:21.735 [2024-10-11 22:54:24.798972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:21.735 [2024-10-11 22:54:24.845676] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:21.735 [2024-10-11 22:54:24.845728] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
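The nvmf_tcp_init sequence recorded above splits the two E810 ports into a target/initiator pair: cvl_0_0 moves into a network namespace as the target (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), TCP port 4420 is opened, and both directions are ping-verified. A dry-run sketch of that sequence; the commands are printed rather than executed because they require root and the real NICs:

```shell
#!/usr/bin/env bash
# Dry-run replay of the netns bring-up from the log above.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                       # target port into the ns
run ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                    # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                # target -> initiator
```

The nvmf_tgt process is then launched inside the same namespace via `ip netns exec cvl_0_0_ns_spdk`, which is why NVMF_TARGET_NS_CMD is prepended to NVMF_APP.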
00:31:21.735 [2024-10-11 22:54:24.845751] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:21.735 [2024-10-11 22:54:24.845762] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:21.735 [2024-10-11 22:54:24.845771] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:21.735 [2024-10-11 22:54:24.847349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:21.735 [2024-10-11 22:54:24.847405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:21.735 [2024-10-11 22:54:24.847471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:21.735 [2024-10-11 22:54:24.847474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:31:21.735 22:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:21.993 [2024-10-11 22:54:25.226979] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:21.993 22:54:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:21.993 22:54:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:21.993 22:54:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.251 22:54:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:22.508 Malloc1 00:31:22.508 22:54:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:22.779 22:54:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:23.042 22:54:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:23.299 [2024-10-11 22:54:26.418804] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:23.299 22:54:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:23.557 22:54:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:23.557 22:54:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:23.557 22:54:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:23.557 22:54:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:23.557 22:54:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:23.557 22:54:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:23.557 22:54:26 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:23.557 22:54:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:23.557 22:54:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:23.557 22:54:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:23.557 22:54:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:23.557 22:54:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:23.557 22:54:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:23.557 22:54:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:23.557 22:54:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:23.557 22:54:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:23.557 22:54:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:23.557 22:54:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:23.557 22:54:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:23.557 22:54:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:23.557 22:54:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:23.557 22:54:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:23.557 22:54:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:23.814 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:23.814 fio-3.35 00:31:23.814 Starting 1 thread 00:31:26.422 00:31:26.422 test: (groupid=0, jobs=1): err= 0: pid=351982: Fri Oct 11 22:54:29 2024 00:31:26.422 read: IOPS=8615, BW=33.7MiB/s (35.3MB/s)(67.5MiB/2007msec) 00:31:26.422 slat (usec): min=2, max=169, avg= 2.65, stdev= 2.05 00:31:26.422 clat (usec): min=2685, max=13026, avg=8057.79, stdev=680.65 00:31:26.422 lat (usec): min=2720, max=13028, avg=8060.44, stdev=680.54 00:31:26.422 clat percentiles (usec): 00:31:26.422 | 1.00th=[ 6521], 5.00th=[ 6980], 10.00th=[ 7242], 20.00th=[ 7504], 00:31:26.422 | 30.00th=[ 7701], 40.00th=[ 7898], 50.00th=[ 8094], 60.00th=[ 8225], 00:31:26.422 | 70.00th=[ 8455], 80.00th=[ 8586], 90.00th=[ 8848], 95.00th=[ 9110], 00:31:26.422 | 99.00th=[ 9503], 99.50th=[ 9634], 99.90th=[10683], 99.95th=[12256], 00:31:26.422 | 99.99th=[13042] 00:31:26.422 bw ( KiB/s): min=33216, max=34984, per=99.96%, avg=34448.00, stdev=826.42, samples=4 00:31:26.422 iops : min= 8304, max= 8746, avg=8612.00, stdev=206.60, samples=4 00:31:26.422 write: IOPS=8612, BW=33.6MiB/s (35.3MB/s)(67.5MiB/2007msec); 0 zone resets 00:31:26.422 slat (usec): min=2, max=158, avg= 2.74, stdev= 1.61 00:31:26.422 clat (usec): min=1470, max=12196, avg=6693.01, stdev=561.11 00:31:26.422 lat (usec): min=1479, max=12199, avg=6695.75, stdev=561.05 00:31:26.422 clat percentiles (usec): 00:31:26.422 | 1.00th=[ 5342], 5.00th=[ 5866], 10.00th=[ 5997], 20.00th=[ 6259], 00:31:26.422 | 30.00th=[ 6456], 40.00th=[ 6587], 50.00th=[ 6718], 60.00th=[ 6849], 00:31:26.422 | 70.00th=[ 
6980], 80.00th=[ 7111], 90.00th=[ 7373], 95.00th=[ 7504], 00:31:26.422 | 99.00th=[ 7898], 99.50th=[ 8029], 99.90th=[10552], 99.95th=[10814], 00:31:26.422 | 99.99th=[12125] 00:31:26.422 bw ( KiB/s): min=34136, max=34688, per=100.00%, avg=34454.00, stdev=253.10, samples=4 00:31:26.422 iops : min= 8534, max= 8672, avg=8613.50, stdev=63.27, samples=4 00:31:26.422 lat (msec) : 2=0.02%, 4=0.12%, 10=99.65%, 20=0.22% 00:31:26.422 cpu : usr=66.45%, sys=31.95%, ctx=76, majf=0, minf=36 00:31:26.422 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:26.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:26.422 issued rwts: total=17292,17286,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.422 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:26.422 00:31:26.422 Run status group 0 (all jobs): 00:31:26.422 READ: bw=33.7MiB/s (35.3MB/s), 33.7MiB/s-33.7MiB/s (35.3MB/s-35.3MB/s), io=67.5MiB (70.8MB), run=2007-2007msec 00:31:26.422 WRITE: bw=33.6MiB/s (35.3MB/s), 33.6MiB/s-33.6MiB/s (35.3MB/s-35.3MB/s), io=67.5MiB (70.8MB), run=2007-2007msec 00:31:26.422 22:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:26.422 22:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:26.422 22:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:26.422 22:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:31:26.422 22:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:26.422 22:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:26.422 22:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:26.422 22:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:26.422 22:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:26.422 22:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:26.422 22:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:26.422 22:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:26.422 22:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:26.422 22:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:26.423 22:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:26.423 22:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:26.423 22:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:26.423 22:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:26.423 22:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:26.423 22:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:26.423 
22:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:26.423 22:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:26.423 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:26.423 fio-3.35 00:31:26.423 Starting 1 thread 00:31:29.025 00:31:29.025 test: (groupid=0, jobs=1): err= 0: pid=352329: Fri Oct 11 22:54:31 2024 00:31:29.025 read: IOPS=8191, BW=128MiB/s (134MB/s)(257MiB/2006msec) 00:31:29.025 slat (nsec): min=2923, max=99786, avg=3825.23, stdev=2086.54 00:31:29.025 clat (usec): min=2000, max=54866, avg=9185.75, stdev=4159.42 00:31:29.025 lat (usec): min=2003, max=54870, avg=9189.58, stdev=4159.46 00:31:29.025 clat percentiles (usec): 00:31:29.025 | 1.00th=[ 4752], 5.00th=[ 5604], 10.00th=[ 6194], 20.00th=[ 6980], 00:31:29.025 | 30.00th=[ 7570], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9372], 00:31:29.025 | 70.00th=[10028], 80.00th=[10814], 90.00th=[11731], 95.00th=[12780], 00:31:29.025 | 99.00th=[15664], 99.50th=[47449], 99.90th=[53740], 99.95th=[54264], 00:31:29.025 | 99.99th=[54789] 00:31:29.025 bw ( KiB/s): min=57728, max=71520, per=50.24%, avg=65856.00, stdev=6417.31, samples=4 00:31:29.025 iops : min= 3608, max= 4470, avg=4116.00, stdev=401.08, samples=4 00:31:29.025 write: IOPS=4717, BW=73.7MiB/s (77.3MB/s)(135MiB/1826msec); 0 zone resets 00:31:29.025 slat (usec): min=30, max=184, avg=34.72, stdev= 6.41 00:31:29.025 clat (usec): min=5649, max=21203, avg=11661.91, stdev=2048.63 00:31:29.025 lat (usec): min=5696, max=21248, avg=11696.64, stdev=2048.61 00:31:29.025 clat percentiles (usec): 00:31:29.025 | 1.00th=[ 7898], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[ 9896], 
00:31:29.025 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11338], 60.00th=[11863], 00:31:29.025 | 70.00th=[12518], 80.00th=[13435], 90.00th=[14615], 95.00th=[15401], 00:31:29.025 | 99.00th=[16909], 99.50th=[17171], 99.90th=[19006], 99.95th=[20055], 00:31:29.025 | 99.99th=[21103] 00:31:29.025 bw ( KiB/s): min=60224, max=74848, per=90.84%, avg=68568.00, stdev=6710.16, samples=4 00:31:29.025 iops : min= 3764, max= 4678, avg=4285.50, stdev=419.38, samples=4 00:31:29.025 lat (msec) : 4=0.08%, 10=52.40%, 20=46.99%, 50=0.27%, 100=0.25% 00:31:29.025 cpu : usr=77.47%, sys=21.34%, ctx=36, majf=0, minf=55 00:31:29.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:31:29.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:29.026 issued rwts: total=16433,8614,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.026 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:29.026 00:31:29.026 Run status group 0 (all jobs): 00:31:29.026 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=257MiB (269MB), run=2006-2006msec 00:31:29.026 WRITE: bw=73.7MiB/s (77.3MB/s), 73.7MiB/s-73.7MiB/s (77.3MB/s-77.3MB/s), io=135MiB (141MB), run=1826-1826msec 00:31:29.026 22:54:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:29.026 22:54:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:29.026 22:54:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:29.026 22:54:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:29.026 22:54:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:31:29.026 22:54:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 
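The fio runs above are driven through the SPDK NVMe fio plugin: the log shows LD_PRELOAD pointing at build/fio/spdk_nvme and the target encoded in the `--filename` string rather than a device path. The exact contents of example_config.fio are not in the log; a job file of roughly the reported shape (randrw, 4 KiB blocks via the `--bs=4096` override, queue depth 128, ~2 s runtime) would look like this, with all values below being illustrative assumptions:

```ini
; Hypothetical reconstruction of the job shape reported in the log
; ("rw=randrw, bs=4096B, ioengine=spdk, iodepth=128, run=2007msec").
[global]
ioengine=spdk
thread=1
direct=1
rw=randrw
bs=4096
iodepth=128
time_based=1
runtime=2

[test]
; The SPDK plugin parses the transport out of the filename:
filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1
```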
00:31:29.026 22:54:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:29.026 22:54:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:29.026 22:54:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:31:29.283 22:54:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:31:29.283 22:54:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:31:29.283 22:54:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:31:32.563 Nvme0n1 00:31:32.563 22:54:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:35.088 22:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=469e199d-d258-47a7-97e8-12d69cf8a5e0 00:31:35.088 22:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 469e199d-d258-47a7-97e8-12d69cf8a5e0 00:31:35.088 22:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=469e199d-d258-47a7-97e8-12d69cf8a5e0 00:31:35.088 22:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:35.088 22:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:35.088 22:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:31:35.089 22:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores 00:31:35.653 22:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:35.653 { 00:31:35.653 "uuid": "469e199d-d258-47a7-97e8-12d69cf8a5e0", 00:31:35.653 "name": "lvs_0", 00:31:35.653 "base_bdev": "Nvme0n1", 00:31:35.653 "total_data_clusters": 930, 00:31:35.653 "free_clusters": 930, 00:31:35.653 "block_size": 512, 00:31:35.653 "cluster_size": 1073741824 00:31:35.653 } 00:31:35.653 ]' 00:31:35.653 22:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="469e199d-d258-47a7-97e8-12d69cf8a5e0") .free_clusters' 00:31:35.653 22:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:31:35.653 22:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="469e199d-d258-47a7-97e8-12d69cf8a5e0") .cluster_size' 00:31:35.653 22:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:31:35.653 22:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:31:35.653 22:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:31:35.653 952320 00:31:35.653 22:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:35.911 0eb5daed-513d-4f5e-89ff-3c8cdda41dd9 00:31:35.911 22:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:36.169 22:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:36.734 22:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:36.734 22:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:36.734 22:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:36.734 22:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:36.734 22:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:36.734 22:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:36.734 22:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:36.734 22:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:36.734 22:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:36.734 22:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:36.734 22:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:36.734 22:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:36.734 22:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print 
$3}' 00:31:36.734 22:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:36.734 22:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:36.734 22:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:36.734 22:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:36.734 22:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:36.734 22:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:36.992 22:54:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:36.992 22:54:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:36.992 22:54:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:36.992 22:54:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:36.992 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:36.992 fio-3.35 00:31:36.992 Starting 1 thread 00:31:39.519 00:31:39.519 test: (groupid=0, jobs=1): err= 0: pid=353739: Fri Oct 11 22:54:42 2024 00:31:39.519 read: IOPS=5897, BW=23.0MiB/s (24.2MB/s)(46.3MiB/2008msec) 00:31:39.519 slat (nsec): min=1984, max=152030, avg=2613.27, stdev=2089.40 00:31:39.519 clat (usec): min=1233, max=171207, avg=11815.21, stdev=11729.17 00:31:39.519 lat (usec): min=1236, max=171251, avg=11817.83, stdev=11729.43 00:31:39.519 clat percentiles 
(msec): 00:31:39.519 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:31:39.519 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:31:39.519 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 13], 95.00th=[ 13], 00:31:39.519 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:31:39.519 | 99.99th=[ 171] 00:31:39.519 bw ( KiB/s): min=16592, max=26064, per=99.91%, avg=23568.00, stdev=4653.13, samples=4 00:31:39.519 iops : min= 4148, max= 6516, avg=5892.00, stdev=1163.28, samples=4 00:31:39.519 write: IOPS=5895, BW=23.0MiB/s (24.1MB/s)(46.2MiB/2008msec); 0 zone resets 00:31:39.519 slat (usec): min=2, max=101, avg= 2.71, stdev= 1.49 00:31:39.519 clat (usec): min=242, max=169382, avg=9769.53, stdev=10997.12 00:31:39.519 lat (usec): min=244, max=169387, avg=9772.24, stdev=10997.34 00:31:39.519 clat percentiles (msec): 00:31:39.519 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 9], 00:31:39.519 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 10], 00:31:39.519 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 11], 95.00th=[ 11], 00:31:39.519 | 99.00th=[ 12], 99.50th=[ 17], 99.90th=[ 169], 99.95th=[ 169], 00:31:39.519 | 99.99th=[ 169] 00:31:39.519 bw ( KiB/s): min=17640, max=25728, per=99.81%, avg=23540.00, stdev=3937.59, samples=4 00:31:39.519 iops : min= 4410, max= 6432, avg=5885.00, stdev=984.40, samples=4 00:31:39.519 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:31:39.519 lat (msec) : 2=0.03%, 4=0.13%, 10=53.37%, 20=45.91%, 250=0.54% 00:31:39.519 cpu : usr=62.43%, sys=36.27%, ctx=139, majf=0, minf=36 00:31:39.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:39.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:39.519 issued rwts: total=11842,11839,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:39.519 
00:31:39.519 Run status group 0 (all jobs): 00:31:39.519 READ: bw=23.0MiB/s (24.2MB/s), 23.0MiB/s-23.0MiB/s (24.2MB/s-24.2MB/s), io=46.3MiB (48.5MB), run=2008-2008msec 00:31:39.519 WRITE: bw=23.0MiB/s (24.1MB/s), 23.0MiB/s-23.0MiB/s (24.1MB/s-24.1MB/s), io=46.2MiB (48.5MB), run=2008-2008msec 00:31:39.519 22:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:39.777 22:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:41.149 22:54:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=551d03b5-dfe2-4eb1-81c5-2fdbf462cb60 00:31:41.149 22:54:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 551d03b5-dfe2-4eb1-81c5-2fdbf462cb60 00:31:41.149 22:54:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=551d03b5-dfe2-4eb1-81c5-2fdbf462cb60 00:31:41.149 22:54:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:41.149 22:54:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:41.149 22:54:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:31:41.149 22:54:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:41.407 22:54:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:41.407 { 00:31:41.407 "uuid": "469e199d-d258-47a7-97e8-12d69cf8a5e0", 00:31:41.407 "name": "lvs_0", 00:31:41.407 "base_bdev": "Nvme0n1", 00:31:41.407 "total_data_clusters": 930, 00:31:41.407 "free_clusters": 0, 00:31:41.407 "block_size": 512, 00:31:41.407 "cluster_size": 1073741824 
00:31:41.407 }, 00:31:41.407 { 00:31:41.407 "uuid": "551d03b5-dfe2-4eb1-81c5-2fdbf462cb60", 00:31:41.407 "name": "lvs_n_0", 00:31:41.407 "base_bdev": "0eb5daed-513d-4f5e-89ff-3c8cdda41dd9", 00:31:41.407 "total_data_clusters": 237847, 00:31:41.407 "free_clusters": 237847, 00:31:41.407 "block_size": 512, 00:31:41.407 "cluster_size": 4194304 00:31:41.407 } 00:31:41.407 ]' 00:31:41.407 22:54:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="551d03b5-dfe2-4eb1-81c5-2fdbf462cb60") .free_clusters' 00:31:41.407 22:54:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:31:41.407 22:54:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="551d03b5-dfe2-4eb1-81c5-2fdbf462cb60") .cluster_size' 00:31:41.407 22:54:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:41.407 22:54:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:31:41.407 22:54:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:31:41.407 951388 00:31:41.407 22:54:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:41.971 ad0367af-be4d-4a9c-b9b2-ff277af25c0d 00:31:41.971 22:54:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:42.229 22:54:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:42.487 22:54:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:43.052 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:43.052 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:43.052 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:43.052 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:43.052 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:43.052 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:43.052 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:43.052 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:43.052 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:43.052 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:43.052 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:43.052 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:43.052 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:31:43.052 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:43.052 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:43.052 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:43.052 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:43.052 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:43.052 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:43.052 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:43.052 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:43.052 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:43.052 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:43.052 fio-3.35 00:31:43.052 Starting 1 thread 00:31:45.578 00:31:45.578 test: (groupid=0, jobs=1): err= 0: pid=354468: Fri Oct 11 22:54:48 2024 00:31:45.578 read: IOPS=5667, BW=22.1MiB/s (23.2MB/s)(44.5MiB/2009msec) 00:31:45.578 slat (nsec): min=1995, max=155804, avg=2591.68, stdev=2088.33 00:31:45.578 clat (usec): min=4613, max=20927, avg=12371.53, stdev=1128.67 00:31:45.578 lat (usec): min=4618, max=20930, avg=12374.12, stdev=1128.56 00:31:45.578 clat percentiles (usec): 00:31:45.578 | 1.00th=[ 9634], 5.00th=[10683], 
10.00th=[11076], 20.00th=[11469], 00:31:45.578 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12387], 60.00th=[12649], 00:31:45.578 | 70.00th=[12911], 80.00th=[13304], 90.00th=[13698], 95.00th=[14091], 00:31:45.578 | 99.00th=[14746], 99.50th=[15008], 99.90th=[18482], 99.95th=[19792], 00:31:45.578 | 99.99th=[20579] 00:31:45.578 bw ( KiB/s): min=21520, max=23240, per=99.84%, avg=22634.00, stdev=769.04, samples=4 00:31:45.578 iops : min= 5380, max= 5810, avg=5658.50, stdev=192.26, samples=4 00:31:45.578 write: IOPS=5637, BW=22.0MiB/s (23.1MB/s)(44.2MiB/2009msec); 0 zone resets 00:31:45.578 slat (usec): min=2, max=106, avg= 2.68, stdev= 1.48 00:31:45.578 clat (usec): min=2205, max=18482, avg=10135.82, stdev=924.89 00:31:45.578 lat (usec): min=2210, max=18485, avg=10138.51, stdev=924.87 00:31:45.578 clat percentiles (usec): 00:31:45.578 | 1.00th=[ 8029], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9503], 00:31:45.578 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:31:45.578 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11469], 00:31:45.578 | 99.00th=[12125], 99.50th=[12518], 99.90th=[16909], 99.95th=[18220], 00:31:45.578 | 99.99th=[18482] 00:31:45.578 bw ( KiB/s): min=22400, max=22656, per=99.94%, avg=22534.00, stdev=105.20, samples=4 00:31:45.578 iops : min= 5600, max= 5664, avg=5633.50, stdev=26.30, samples=4 00:31:45.578 lat (msec) : 4=0.05%, 10=22.47%, 20=77.46%, 50=0.02% 00:31:45.578 cpu : usr=62.60%, sys=36.11%, ctx=136, majf=0, minf=36 00:31:45.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:45.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:45.578 issued rwts: total=11386,11325,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:45.578 00:31:45.578 Run status group 0 (all jobs): 00:31:45.578 READ: bw=22.1MiB/s 
(23.2MB/s), 22.1MiB/s-22.1MiB/s (23.2MB/s-23.2MB/s), io=44.5MiB (46.6MB), run=2009-2009msec 00:31:45.578 WRITE: bw=22.0MiB/s (23.1MB/s), 22.0MiB/s-22.0MiB/s (23.1MB/s-23.1MB/s), io=44.2MiB (46.4MB), run=2009-2009msec 00:31:45.578 22:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:45.835 22:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:45.835 22:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:50.015 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:50.015 22:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:53.293 22:54:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:53.294 22:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:55.192 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:55.192 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:55.192 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:55.192 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:55.192 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:31:55.192 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:55.192 22:54:58 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:31:55.192 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:55.192 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:55.192 rmmod nvme_tcp 00:31:55.192 rmmod nvme_fabrics 00:31:55.192 rmmod nvme_keyring 00:31:55.192 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:55.192 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:31:55.192 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:31:55.192 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 351625 ']' 00:31:55.192 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 351625 00:31:55.192 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 351625 ']' 00:31:55.192 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 351625 00:31:55.192 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:31:55.192 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:55.192 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 351625 00:31:55.192 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:55.192 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:55.192 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 351625' 00:31:55.192 killing process with pid 351625 00:31:55.192 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 351625 00:31:55.192 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@974 -- # wait 351625 00:31:55.451 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:55.451 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:55.451 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:55.451 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:31:55.451 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:31:55.451 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:31:55.451 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:55.451 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:55.451 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:55.451 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.451 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:55.451 22:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.356 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:57.356 00:31:57.356 real 0m38.345s 00:31:57.356 user 2m27.367s 00:31:57.356 sys 0m7.072s 00:31:57.356 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:57.356 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.356 ************************************ 00:31:57.356 END TEST nvmf_fio_host 00:31:57.356 ************************************ 00:31:57.356 22:55:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:57.356 22:55:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:57.356 22:55:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:57.356 22:55:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.356 ************************************ 00:31:57.356 START TEST nvmf_failover 00:31:57.356 ************************************ 00:31:57.356 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:57.615 * Looking for test storage... 00:31:57.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:31:57.615 22:55:00 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:57.615 22:55:00 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:57.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.615 --rc genhtml_branch_coverage=1 00:31:57.615 --rc genhtml_function_coverage=1 00:31:57.615 --rc genhtml_legend=1 00:31:57.615 --rc geninfo_all_blocks=1 00:31:57.615 --rc geninfo_unexecuted_blocks=1 00:31:57.615 00:31:57.615 ' 00:31:57.615 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:57.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.615 --rc genhtml_branch_coverage=1 00:31:57.615 --rc genhtml_function_coverage=1 00:31:57.616 --rc genhtml_legend=1 00:31:57.616 --rc geninfo_all_blocks=1 00:31:57.616 --rc geninfo_unexecuted_blocks=1 00:31:57.616 00:31:57.616 ' 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:57.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.616 --rc genhtml_branch_coverage=1 00:31:57.616 --rc genhtml_function_coverage=1 00:31:57.616 --rc genhtml_legend=1 00:31:57.616 --rc geninfo_all_blocks=1 00:31:57.616 --rc geninfo_unexecuted_blocks=1 00:31:57.616 00:31:57.616 ' 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:57.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.616 --rc genhtml_branch_coverage=1 00:31:57.616 --rc genhtml_function_coverage=1 00:31:57.616 --rc genhtml_legend=1 00:31:57.616 --rc geninfo_all_blocks=1 00:31:57.616 --rc geninfo_unexecuted_blocks=1 00:31:57.616 00:31:57.616 ' 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:57.616 22:55:00 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:57.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:31:57.616 22:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:00.149 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.149 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:00.150 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:00.150 22:55:02 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:00.150 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:00.150 Found net devices 
under 0000:0a:00.1: cvl_0_1 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:00.150 
22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:00.150 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:00.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:00.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:32:00.150 00:32:00.150 --- 10.0.0.2 ping statistics --- 00:32:00.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.150 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:00.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:00.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:32:00.150 00:32:00.150 --- 10.0.0.1 ping statistics --- 00:32:00.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.150 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=357812 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@508 -- # waitforlisten 357812 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 357812 ']' 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:00.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:00.150 [2024-10-11 22:55:03.147007] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:32:00.150 [2024-10-11 22:55:03.147084] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:00.150 [2024-10-11 22:55:03.214666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:00.150 [2024-10-11 22:55:03.264211] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:00.150 [2024-10-11 22:55:03.264265] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:00.150 [2024-10-11 22:55:03.264284] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:00.150 [2024-10-11 22:55:03.264295] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:32:00.150 [2024-10-11 22:55:03.264305] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:00.150 [2024-10-11 22:55:03.265951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:00.150 [2024-10-11 22:55:03.266015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:00.150 [2024-10-11 22:55:03.266019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:00.150 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:00.716 [2024-10-11 22:55:03.699222] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:00.716 22:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:00.974 Malloc0 00:32:00.974 22:55:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:01.231 22:55:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:01.489 22:55:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:01.747 [2024-10-11 22:55:04.991453] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:01.747 22:55:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:02.004 [2024-10-11 22:55:05.260229] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:02.262 22:55:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:02.520 [2024-10-11 22:55:05.585302] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:02.520 22:55:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=358131 00:32:02.520 22:55:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:02.520 22:55:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:02.520 22:55:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 358131 /var/tmp/bdevperf.sock 00:32:02.520 22:55:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- 
# '[' -z 358131 ']' 00:32:02.520 22:55:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:02.520 22:55:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:02.520 22:55:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:02.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:02.520 22:55:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:02.520 22:55:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:02.777 22:55:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:02.777 22:55:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:32:02.777 22:55:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:03.033 NVMe0n1 00:32:03.033 22:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:03.598 00:32:03.598 22:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=358280 00:32:03.598 22:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:03.598 22:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:32:04.531 22:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:04.789 22:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:32:08.069 22:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:08.327 00:32:08.327 22:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:08.585 [2024-10-11 22:55:11.693950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b710 is same with the state(6) to be set 00:32:08.585 [2024-10-11 22:55:11.694021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b710 is same with the state(6) to be set 00:32:08.585 [2024-10-11 22:55:11.694037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b710 is same with the state(6) to be set 00:32:08.585 [2024-10-11 22:55:11.694049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b710 is same with the state(6) to be set 00:32:08.585 [2024-10-11 22:55:11.694061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b710 is same with the state(6) to be set 00:32:08.585 [2024-10-11 22:55:11.694073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b710 is same with the state(6) to be set 00:32:08.585 [2024-10-11 22:55:11.694085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b710 is same with the state(6) to 
be set 00:32:08.585 [2024-10-11 22:55:11.694097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b710 is same with the state(6) to be set 00:32:08.585 22:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:32:11.866 22:55:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:11.866 [2024-10-11 22:55:14.970077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:11.867 22:55:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:32:12.803 22:55:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:13.062 [2024-10-11 22:55:16.248146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c630 is same with the state(6) to be set 00:32:13.062 [2024-10-11 22:55:16.248206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c630 is same with the state(6) to be set 00:32:13.062 [2024-10-11 22:55:16.248231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c630 is same with the state(6) to be set 00:32:13.062 [2024-10-11 22:55:16.248244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c630 is same with the state(6) to be set 00:32:13.062 [2024-10-11 22:55:16.248255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c630 is same with the state(6) to be set 00:32:13.062 [2024-10-11 22:55:16.248267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c630 is same with the state(6) to be set 00:32:13.062 [2024-10-11 22:55:16.248278] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c630 is same with the state(6) to be set 00:32:13.062 [2024-10-11 22:55:16.248290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c630 is same with the state(6) to be set 00:32:13.062 [2024-10-11 22:55:16.248301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c630 is same with the state(6) to be set 00:32:13.062 [2024-10-11 22:55:16.248313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c630 is same with the state(6) to be set 00:32:13.062 22:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 358280 00:32:19.625 { 00:32:19.625 "results": [ 00:32:19.625 { 00:32:19.625 "job": "NVMe0n1", 00:32:19.625 "core_mask": "0x1", 00:32:19.625 "workload": "verify", 00:32:19.625 "status": "finished", 00:32:19.625 "verify_range": { 00:32:19.625 "start": 0, 00:32:19.625 "length": 16384 00:32:19.625 }, 00:32:19.625 "queue_depth": 128, 00:32:19.625 "io_size": 4096, 00:32:19.625 "runtime": 15.007179, 00:32:19.625 "iops": 8129.709121214587, 00:32:19.625 "mibps": 31.75667625474448, 00:32:19.625 "io_failed": 11701, 00:32:19.625 "io_timeout": 0, 00:32:19.625 "avg_latency_us": 14339.611536397844, 00:32:19.625 "min_latency_us": 603.7807407407407, 00:32:19.625 "max_latency_us": 19612.254814814816 00:32:19.625 } 00:32:19.625 ], 00:32:19.625 "core_count": 1 00:32:19.625 } 00:32:19.625 22:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 358131 00:32:19.625 22:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 358131 ']' 00:32:19.625 22:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 358131 00:32:19.625 22:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:32:19.625 22:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:32:19.625 22:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 358131 00:32:19.625 22:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:19.625 22:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:19.625 22:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 358131' 00:32:19.625 killing process with pid 358131 00:32:19.625 22:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 358131 00:32:19.625 22:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 358131 00:32:19.625 22:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:19.625 [2024-10-11 22:55:05.654635] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:32:19.625 [2024-10-11 22:55:05.654738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid358131 ] 00:32:19.625 [2024-10-11 22:55:05.715472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.625 [2024-10-11 22:55:05.762626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:19.625 Running I/O for 15 seconds... 
00:32:19.625 8393.00 IOPS, 32.79 MiB/s [2024-10-11T20:55:22.893Z] [2024-10-11 22:55:07.954113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.625 [2024-10-11 22:55:07.954172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.625 [2024-10-11 22:55:07.954208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.625 [2024-10-11 22:55:07.954224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.625 [2024-10-11 22:55:07.954241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.625 [2024-10-11 22:55:07.954255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.625 [2024-10-11 22:55:07.954271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.625 [2024-10-11 22:55:07.954285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.625 [2024-10-11 22:55:07.954300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.625 [2024-10-11 22:55:07.954315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.625 [2024-10-11 22:55:07.954330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.625 
[2024-10-11 22:55:07.954346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.625 [2024-10-11 22:55:07.954362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.625 [2024-10-11 22:55:07.954376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.954392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.626 [2024-10-11 22:55:07.954408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.954423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.626 [2024-10-11 22:55:07.954438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.954454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.626 [2024-10-11 22:55:07.954469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.954485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.626 [2024-10-11 22:55:07.954498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.954526] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.626 [2024-10-11 22:55:07.954541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.954582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.626 [2024-10-11 22:55:07.954599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.954615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.626 [2024-10-11 22:55:07.954644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.954660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.626 [2024-10-11 22:55:07.954674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.954689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.626 [2024-10-11 22:55:07.954703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.954719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.954733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.954749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.954763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.954779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.954793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.954809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.954823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.954838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.954852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.954883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.954897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.954911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.954925] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.954940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.954958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.954973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.954986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.955001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.955014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.955029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.955043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.955058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.955071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.955086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:17 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.955100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.955115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.955129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.955144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.955157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.955172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.955185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.955199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.955212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.955227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.955240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:19.626 [2024-10-11 22:55:07.955254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.955268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.955283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.955297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.955315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.955329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.955345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.955358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.955372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.955386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.955401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.955415] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.955431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.955444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.955459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.955472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.955487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.955501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.955516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.955529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.955544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.955581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.955599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 
lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.955613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.955628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.955642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.626 [2024-10-11 22:55:07.955657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.626 [2024-10-11 22:55:07.955671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.955686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.955703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.955719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.955733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.955748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.955762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 
[2024-10-11 22:55:07.955777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.955791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.955806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.955820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.955835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.955849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.955878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.955893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.955908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.627 [2024-10-11 22:55:07.955922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.955937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.955950] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.955964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.955978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.955992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 
lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 
[2024-10-11 22:55:07.956284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956436] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 
[2024-10-11 22:55:07.956804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-10-11 22:55:07.956894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.627 [2024-10-11 22:55:07.956921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.627 [2024-10-11 22:55:07.956949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-10-11 22:55:07.956964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.628 [2024-10-11 22:55:07.956977] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.628 [2024-10-11 22:55:07.957021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.628 [2024-10-11 22:55:07.957050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.628 [2024-10-11 22:55:07.957079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.628 [2024-10-11 22:55:07.957108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.628 [2024-10-11 22:55:07.957136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:78928 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.957164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.957193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.957222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.957256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.957285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.957313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 
22:55:07.957329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.957342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.957370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.957399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.957428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.957456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.957485] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.957513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.957541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.957596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.957630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.957661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:79064 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.957690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.957720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.957750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:79088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.957780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.957809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.957838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957853] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.957866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.957911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:79128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.957939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:79136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.957968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.957983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.957996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.958011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.958028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.958043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.958057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.958072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-10-11 22:55:07.958085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.958099] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd6ac0 is same with the state(6) to be set 00:32:19.628 [2024-10-11 22:55:07.958115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:19.628 [2024-10-11 22:55:07.958127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:19.628 [2024-10-11 22:55:07.958138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79176 len:8 PRP1 0x0 PRP2 0x0 00:32:19.628 [2024-10-11 22:55:07.958151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.958205] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdd6ac0 was disconnected and freed. reset controller. 
00:32:19.628 [2024-10-11 22:55:07.958222] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:19.628 [2024-10-11 22:55:07.958255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.628 [2024-10-11 22:55:07.958290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.958312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.628 [2024-10-11 22:55:07.958327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-10-11 22:55:07.958342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.628 [2024-10-11 22:55:07.958355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:07.958370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.629 [2024-10-11 22:55:07.958383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:07.958397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:32:19.629 [2024-10-11 22:55:07.961732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.629 [2024-10-11 22:55:07.961772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb5d20 (9): Bad file descriptor 00:32:19.629 [2024-10-11 22:55:08.035548] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:19.629 8041.00 IOPS, 31.41 MiB/s [2024-10-11T20:55:22.897Z] 8106.67 IOPS, 31.67 MiB/s [2024-10-11T20:55:22.897Z] 8136.25 IOPS, 31.78 MiB/s [2024-10-11T20:55:22.897Z] [2024-10-11 22:55:11.695370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.695416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.695455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.695473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.695505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.695520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.695536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.695572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:19.629 [2024-10-11 22:55:11.695593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.695608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.695624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.695639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.695656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.695670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.695686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.695700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.695715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.695729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.695744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.695759] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.695775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.695789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.695805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.695819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.695835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.695849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.695883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.695902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.695934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.695948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.695963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.695977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.695992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.696005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.696021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.696034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.696050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.696063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.696077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.696091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.696106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.696119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 
[2024-10-11 22:55:11.696134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.696148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.696163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.696177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.696192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.696206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.696221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.696235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.696250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.696264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.696278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.696295] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.696311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.696324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.696339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.696352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.696367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.696380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.696395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.696408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.696422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-10-11 22:55:11.696435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-10-11 22:55:11.696450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 
lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.696463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.696478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.696491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.696505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.696519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.696557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.696574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.696590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.696604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.696619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.696633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 
[2024-10-11 22:55:11.696648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.696662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.696681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.696696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.696718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.696732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.696748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.696761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.696776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.696795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.696810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.696824] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.696855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.696868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.696883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.696895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.696910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.696922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.696937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.696950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.696965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.696978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.696992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 
lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.697006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.697020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.697033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.697047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.697064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.697079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.697093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.697107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.697120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.697135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.697149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 
[2024-10-11 22:55:11.697163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.697176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.697197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.697211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.697226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.697239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.697254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.697267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.697282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.697296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.697311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.697324] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.697339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.697352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.697367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.697380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.697394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.630 [2024-10-11 22:55:11.697408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.697426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.630 [2024-10-11 22:55:11.697440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.697454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.630 [2024-10-11 22:55:11.697468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.697482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 
lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.630 [2024-10-11 22:55:11.697495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.697510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.630 [2024-10-11 22:55:11.697523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.697537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.630 [2024-10-11 22:55:11.697573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.697592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.630 [2024-10-11 22:55:11.697606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.697621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.630 [2024-10-11 22:55:11.697634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.697649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.630 [2024-10-11 22:55:11.697662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 
22:55:11.697684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.630 [2024-10-11 22:55:11.697699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.697714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.630 [2024-10-11 22:55:11.697728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.630 [2024-10-11 22:55:11.697743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.630 [2024-10-11 22:55:11.697757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.697772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.697785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.697801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.697818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.697834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.697848] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.697878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.697891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.697906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.697919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.697934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.697948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.697962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.697975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.697989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698209] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 
[2024-10-11 22:55:11.698548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698748] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.698980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.698995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.699012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.699028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.699042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.699057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.631 [2024-10-11 22:55:11.699070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.631 [2024-10-11 22:55:11.699085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.632 [2024-10-11 22:55:11.699099] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:11.699114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.632 [2024-10-11 22:55:11.699127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:11.699143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.632 [2024-10-11 22:55:11.699157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:11.699173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.632 [2024-10-11 22:55:11.699186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:11.699201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.632 [2024-10-11 22:55:11.699215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:11.699231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.632 [2024-10-11 22:55:11.699244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:11.699259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 
lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.632 [2024-10-11 22:55:11.699273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:11.699288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.632 [2024-10-11 22:55:11.699302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:11.699317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.632 [2024-10-11 22:55:11.699330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:11.699345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.632 [2024-10-11 22:55:11.699358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:11.699392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:19.632 [2024-10-11 22:55:11.699411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:19.632 [2024-10-11 22:55:11.699424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79328 len:8 PRP1 0x0 PRP2 0x0 00:32:19.632 [2024-10-11 22:55:11.699437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:11.699500] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdd8850 was disconnected and 
freed. reset controller. 00:32:19.632 [2024-10-11 22:55:11.699518] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:32:19.632 [2024-10-11 22:55:11.699575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.632 [2024-10-11 22:55:11.699595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:11.699611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.632 [2024-10-11 22:55:11.699625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:11.699639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.632 [2024-10-11 22:55:11.699653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:11.699667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.632 [2024-10-11 22:55:11.699681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:11.699694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:32:19.632 [2024-10-11 22:55:11.702979] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.632 [2024-10-11 22:55:11.703019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb5d20 (9): Bad file descriptor 00:32:19.632 [2024-10-11 22:55:11.815094] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:19.632 7943.00 IOPS, 31.03 MiB/s [2024-10-11T20:55:22.900Z] 7975.83 IOPS, 31.16 MiB/s [2024-10-11T20:55:22.900Z] 8001.14 IOPS, 31.25 MiB/s [2024-10-11T20:55:22.900Z] 8033.12 IOPS, 31.38 MiB/s [2024-10-11T20:55:22.900Z] 8062.56 IOPS, 31.49 MiB/s [2024-10-11T20:55:22.900Z] [2024-10-11 22:55:16.251235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.632 [2024-10-11 22:55:16.251278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:16.251313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.632 [2024-10-11 22:55:16.251329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:16.251347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.632 [2024-10-11 22:55:16.251362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:16.251379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:10992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.632 [2024-10-11 22:55:16.251392] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:16.251409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.632 [2024-10-11 22:55:16.251435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:16.251453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.632 [2024-10-11 22:55:16.251467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:16.251482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.632 [2024-10-11 22:55:16.251496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:16.251510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.632 [2024-10-11 22:55:16.251524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:16.251563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.632 [2024-10-11 22:55:16.251580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:16.251596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 
lba:11040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.632 [2024-10-11 22:55:16.251609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:16.251625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.632 [2024-10-11 22:55:16.251639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:16.251655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.632 [2024-10-11 22:55:16.251669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:16.251685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.632 [2024-10-11 22:55:16.251700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:16.251717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.632 [2024-10-11 22:55:16.251730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:16.251746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.632 [2024-10-11 22:55:16.251760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 
[2024-10-11 22:55:16.251776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.632 [2024-10-11 22:55:16.251790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:16.251806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.632 [2024-10-11 22:55:16.251820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:16.251841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.632 [2024-10-11 22:55:16.251856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:16.251887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.632 [2024-10-11 22:55:16.251901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:16.251916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.632 [2024-10-11 22:55:16.251930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:16.251945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.632 [2024-10-11 22:55:16.251959] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:16.251974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.632 [2024-10-11 22:55:16.251988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:16.252003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.632 [2024-10-11 22:55:16.252018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-10-11 22:55:16.252033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.632 [2024-10-11 22:55:16.252047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.633 [2024-10-11 22:55:16.252062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.633 [2024-10-11 22:55:16.252076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.633 [2024-10-11 22:55:16.252091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.633 [2024-10-11 22:55:16.252105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.633 [2024-10-11 22:55:16.252120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 
lba:11176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.633 [2024-10-11 22:55:16.252134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.633 [2024-10-11 22:55:16.252149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.633 [2024-10-11 22:55:16.252162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / ABORTED - SQ DELETION (00/08) message pairs repeat for sqid:1 nsid:1, lba:11192 through lba:11440 (len:8 each, SGL DATA BLOCK OFFSET 0x0 len:0x1000), timestamps 22:55:16.252178 to 22:55:16.253147 ...]
00:32:19.633 [2024-10-11 22:55:16.253179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:19.633 [2024-10-11 22:55:16.253196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11448 len:8 PRP1 0x0 PRP2 0x0 00:32:19.633 [2024-10-11 22:55:16.253209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... from lba:11448 onward each entry is a repeated triplet: nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs *ERROR*: aborting queued i/o, then 558:nvme_qpair_manual_complete_request *NOTICE*: Command completed manually:, then a WRITE (sqid:1 cid:0 nsid:1, len:8, PRP1 0x0 PRP2 0x0) completed with ABORTED - SQ DELETION (00/08); the pattern repeats for lba:11456 through lba:11856, timestamps 22:55:16.253226 to 22:55:16.255730 ...]
00:32:19.636 [2024-10-11 22:55:16.255743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.636 [2024-10-11 22:55:16.255756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:19.636 [2024-10-11 22:55:16.255767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:19.636 [2024-10-11 22:55:16.255778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11864 len:8 PRP1 0x0 PRP2 0x0 00:32:19.636 [2024-10-11 22:55:16.255791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.636 [2024-10-11 22:55:16.255804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:19.636 [2024-10-11 22:55:16.255814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:19.636 [2024-10-11 22:55:16.255826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:8 PRP1 0x0 PRP2 0x0 00:32:19.636 [2024-10-11 22:55:16.255838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.636 [2024-10-11 22:55:16.255866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:19.636 [2024-10-11 22:55:16.255878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:19.636 [2024-10-11 22:55:16.255888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11880 len:8 PRP1 0x0 PRP2 0x0 00:32:19.636 [2024-10-11 22:55:16.255901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.636 [2024-10-11 22:55:16.255914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:19.636 
[2024-10-11 22:55:16.255925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:19.636 [2024-10-11 22:55:16.255935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11888 len:8 PRP1 0x0 PRP2 0x0 00:32:19.636 [2024-10-11 22:55:16.255948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.636 [2024-10-11 22:55:16.255960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:19.636 [2024-10-11 22:55:16.255971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:19.636 [2024-10-11 22:55:16.255982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11896 len:8 PRP1 0x0 PRP2 0x0 00:32:19.636 [2024-10-11 22:55:16.255994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.636 [2024-10-11 22:55:16.256011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:19.636 [2024-10-11 22:55:16.256022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:19.636 [2024-10-11 22:55:16.256033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:8 PRP1 0x0 PRP2 0x0 00:32:19.636 [2024-10-11 22:55:16.256045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.636 [2024-10-11 22:55:16.256058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:19.636 [2024-10-11 22:55:16.256068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:19.636 [2024-10-11 22:55:16.256079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:11912 len:8 PRP1 0x0 PRP2 0x0 00:32:19.636 [2024-10-11 22:55:16.256095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.636 [2024-10-11 22:55:16.256109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:19.636 [2024-10-11 22:55:16.256119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:19.636 [2024-10-11 22:55:16.256130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11920 len:8 PRP1 0x0 PRP2 0x0 00:32:19.636 [2024-10-11 22:55:16.256142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.636 [2024-10-11 22:55:16.256155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:19.636 [2024-10-11 22:55:16.256165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:19.636 [2024-10-11 22:55:16.256176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11928 len:8 PRP1 0x0 PRP2 0x0 00:32:19.636 [2024-10-11 22:55:16.256188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.636 [2024-10-11 22:55:16.256200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:19.636 [2024-10-11 22:55:16.256211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:19.636 [2024-10-11 22:55:16.256221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:8 PRP1 0x0 PRP2 0x0 00:32:19.636 [2024-10-11 22:55:16.256234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.636 [2024-10-11 22:55:16.256246] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:19.636 [2024-10-11 22:55:16.256256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:19.636 [2024-10-11 22:55:16.256267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11944 len:8 PRP1 0x0 PRP2 0x0 00:32:19.636 [2024-10-11 22:55:16.256280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.636 [2024-10-11 22:55:16.256292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:19.636 [2024-10-11 22:55:16.256303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:19.636 [2024-10-11 22:55:16.256313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11952 len:8 PRP1 0x0 PRP2 0x0 00:32:19.636 [2024-10-11 22:55:16.256325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.636 [2024-10-11 22:55:16.256338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:19.636 [2024-10-11 22:55:16.256348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:19.636 [2024-10-11 22:55:16.256359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11960 len:8 PRP1 0x0 PRP2 0x0 00:32:19.636 [2024-10-11 22:55:16.256371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.636 [2024-10-11 22:55:16.256388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:19.636 [2024-10-11 22:55:16.256399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:19.636 [2024-10-11 
22:55:16.256410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:8 PRP1 0x0 PRP2 0x0 00:32:19.636 [2024-10-11 22:55:16.256422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.636 [2024-10-11 22:55:16.256435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:19.636 [2024-10-11 22:55:16.256450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:19.636 [2024-10-11 22:55:16.256464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11976 len:8 PRP1 0x0 PRP2 0x0 00:32:19.636 [2024-10-11 22:55:16.256477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.636 [2024-10-11 22:55:16.256490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:19.636 [2024-10-11 22:55:16.256500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:19.636 [2024-10-11 22:55:16.256511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11984 len:8 PRP1 0x0 PRP2 0x0 00:32:19.636 [2024-10-11 22:55:16.256524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.636 [2024-10-11 22:55:16.256605] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xde1950 was disconnected and freed. reset controller. 
00:32:19.636 [2024-10-11 22:55:16.256636] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:19.636 [2024-10-11 22:55:16.256670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.636 [2024-10-11 22:55:16.256688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.636 [2024-10-11 22:55:16.256703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.636 [2024-10-11 22:55:16.256717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.636 [2024-10-11 22:55:16.256731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.636 [2024-10-11 22:55:16.256745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.636 [2024-10-11 22:55:16.256759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.636 [2024-10-11 22:55:16.256772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.636 [2024-10-11 22:55:16.256785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:32:19.636 [2024-10-11 22:55:16.260057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.636 [2024-10-11 22:55:16.260095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb5d20 (9): Bad file descriptor 00:32:19.636 [2024-10-11 22:55:16.379940] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:19.636 7984.70 IOPS, 31.19 MiB/s [2024-10-11T20:55:22.904Z] 8028.36 IOPS, 31.36 MiB/s [2024-10-11T20:55:22.904Z] 8055.58 IOPS, 31.47 MiB/s [2024-10-11T20:55:22.904Z] 8092.31 IOPS, 31.61 MiB/s [2024-10-11T20:55:22.904Z] 8108.14 IOPS, 31.67 MiB/s [2024-10-11T20:55:22.904Z] 8125.07 IOPS, 31.74 MiB/s 00:32:19.636 Latency(us) 00:32:19.636 [2024-10-11T20:55:22.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:19.636 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:19.636 Verification LBA range: start 0x0 length 0x4000 00:32:19.636 NVMe0n1 : 15.01 8129.71 31.76 779.69 0.00 14339.61 603.78 19612.25 00:32:19.636 [2024-10-11T20:55:22.904Z] =================================================================================================================== 00:32:19.636 [2024-10-11T20:55:22.904Z] Total : 8129.71 31.76 779.69 0.00 14339.61 603.78 19612.25 00:32:19.636 Received shutdown signal, test time was about 15.000000 seconds 00:32:19.636 00:32:19.636 Latency(us) 00:32:19.636 [2024-10-11T20:55:22.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:19.636 [2024-10-11T20:55:22.904Z] =================================================================================================================== 00:32:19.636 [2024-10-11T20:55:22.904Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:19.636 22:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:32:19.636 22:55:22 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:32:19.636 22:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:32:19.636 22:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=360008 00:32:19.636 22:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:32:19.636 22:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 360008 /var/tmp/bdevperf.sock 00:32:19.637 22:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 360008 ']' 00:32:19.637 22:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:19.637 22:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:19.637 22:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:19.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:19.637 22:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:19.637 22:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:19.637 22:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:19.637 22:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:32:19.637 22:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:19.637 [2024-10-11 22:55:22.616131] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:19.637 22:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:19.637 [2024-10-11 22:55:22.880886] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:19.894 22:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:20.151 NVMe0n1 00:32:20.151 22:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:20.716 00:32:20.716 22:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:21.281 00:32:21.281 22:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:21.281 22:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:21.538 22:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:21.796 22:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:25.072 22:55:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:25.072 22:55:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:25.072 22:55:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=360795 00:32:25.072 22:55:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:25.072 22:55:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 360795 00:32:26.444 { 00:32:26.444 "results": [ 00:32:26.444 { 00:32:26.444 "job": "NVMe0n1", 00:32:26.444 "core_mask": "0x1", 00:32:26.444 "workload": "verify", 00:32:26.444 "status": "finished", 00:32:26.444 "verify_range": { 00:32:26.444 "start": 0, 00:32:26.444 "length": 16384 00:32:26.444 }, 00:32:26.444 "queue_depth": 128, 00:32:26.444 "io_size": 4096, 00:32:26.444 "runtime": 1.007135, 00:32:26.444 "iops": 8427.867167758046, 00:32:26.444 "mibps": 32.92135612405487, 00:32:26.444 "io_failed": 0, 00:32:26.444 "io_timeout": 0, 00:32:26.444 "avg_latency_us": 
15119.632916535764, 00:32:26.444 "min_latency_us": 849.5407407407407, 00:32:26.444 "max_latency_us": 15049.007407407407 00:32:26.444 } 00:32:26.444 ], 00:32:26.444 "core_count": 1 00:32:26.444 } 00:32:26.444 22:55:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:26.444 [2024-10-11 22:55:22.119059] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:32:26.444 [2024-10-11 22:55:22.119170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid360008 ] 00:32:26.444 [2024-10-11 22:55:22.182127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.444 [2024-10-11 22:55:22.228496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:26.444 [2024-10-11 22:55:24.852978] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:26.444 [2024-10-11 22:55:24.853064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:26.445 [2024-10-11 22:55:24.853086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.445 [2024-10-11 22:55:24.853104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:26.445 [2024-10-11 22:55:24.853118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.445 [2024-10-11 22:55:24.853132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:32:26.445 [2024-10-11 22:55:24.853162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.445 [2024-10-11 22:55:24.853178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:26.445 [2024-10-11 22:55:24.853192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.445 [2024-10-11 22:55:24.853206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.445 [2024-10-11 22:55:24.853253] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.445 [2024-10-11 22:55:24.853284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf9d20 (9): Bad file descriptor 00:32:26.445 [2024-10-11 22:55:24.857182] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:26.445 Running I/O for 1 seconds... 
00:32:26.445 8360.00 IOPS, 32.66 MiB/s 00:32:26.445 Latency(us) 00:32:26.445 [2024-10-11T20:55:29.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:26.445 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:26.445 Verification LBA range: start 0x0 length 0x4000 00:32:26.445 NVMe0n1 : 1.01 8427.87 32.92 0.00 0.00 15119.63 849.54 15049.01 00:32:26.445 [2024-10-11T20:55:29.713Z] =================================================================================================================== 00:32:26.445 [2024-10-11T20:55:29.713Z] Total : 8427.87 32.92 0.00 0.00 15119.63 849.54 15049.01 00:32:26.445 22:55:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:26.445 22:55:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:26.445 22:55:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:26.702 22:55:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:26.702 22:55:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:26.960 22:55:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:27.525 22:55:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:30.805 22:55:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:30.805 22:55:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:30.805 22:55:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 360008 00:32:30.805 22:55:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 360008 ']' 00:32:30.805 22:55:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 360008 00:32:30.805 22:55:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:32:30.805 22:55:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:30.805 22:55:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 360008 00:32:30.805 22:55:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:30.805 22:55:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:30.805 22:55:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 360008' 00:32:30.805 killing process with pid 360008 00:32:30.805 22:55:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 360008 00:32:30.805 22:55:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 360008 00:32:30.805 22:55:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:30.805 22:55:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:31.063 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:31.063 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:31.063 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:31.063 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:31.063 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:31.063 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:31.063 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:31.063 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:31.063 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:31.063 rmmod nvme_tcp 00:32:31.063 rmmod nvme_fabrics 00:32:31.063 rmmod nvme_keyring 00:32:31.063 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:31.063 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:31.063 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:31.063 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 357812 ']' 00:32:31.063 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 357812 00:32:31.063 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 357812 ']' 00:32:31.063 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 357812 00:32:31.063 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:32:31.063 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:31.063 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 357812 00:32:31.321 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:32:31.321 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:31.321 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 357812' 00:32:31.321 killing process with pid 357812 00:32:31.321 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 357812 00:32:31.321 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 357812 00:32:31.321 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:31.321 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:31.321 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:31.321 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:31.321 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:32:31.321 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:31.321 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:32:31.321 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:31.321 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:31.321 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:31.321 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:31.321 22:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:33.857 22:55:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:33.857 00:32:33.857 real 0m36.000s 00:32:33.857 user 2m6.338s 00:32:33.857 sys 
0m6.457s 00:32:33.857 22:55:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:33.857 22:55:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:33.857 ************************************ 00:32:33.857 END TEST nvmf_failover 00:32:33.857 ************************************ 00:32:33.857 22:55:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:33.857 22:55:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:33.857 22:55:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:33.857 22:55:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.857 ************************************ 00:32:33.857 START TEST nvmf_host_discovery 00:32:33.857 ************************************ 00:32:33.857 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:33.857 * Looking for test storage... 
00:32:33.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:33.857 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:33.857 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:32:33.857 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:33.857 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:33.857 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:33.857 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:33.857 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:33.857 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:33.857 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:33.857 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:33.857 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:33.857 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:33.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.858 --rc genhtml_branch_coverage=1 00:32:33.858 --rc genhtml_function_coverage=1 00:32:33.858 --rc 
genhtml_legend=1 00:32:33.858 --rc geninfo_all_blocks=1 00:32:33.858 --rc geninfo_unexecuted_blocks=1 00:32:33.858 00:32:33.858 ' 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:33.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.858 --rc genhtml_branch_coverage=1 00:32:33.858 --rc genhtml_function_coverage=1 00:32:33.858 --rc genhtml_legend=1 00:32:33.858 --rc geninfo_all_blocks=1 00:32:33.858 --rc geninfo_unexecuted_blocks=1 00:32:33.858 00:32:33.858 ' 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:33.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.858 --rc genhtml_branch_coverage=1 00:32:33.858 --rc genhtml_function_coverage=1 00:32:33.858 --rc genhtml_legend=1 00:32:33.858 --rc geninfo_all_blocks=1 00:32:33.858 --rc geninfo_unexecuted_blocks=1 00:32:33.858 00:32:33.858 ' 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:33.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.858 --rc genhtml_branch_coverage=1 00:32:33.858 --rc genhtml_function_coverage=1 00:32:33.858 --rc genhtml_legend=1 00:32:33.858 --rc geninfo_all_blocks=1 00:32:33.858 --rc geninfo_unexecuted_blocks=1 00:32:33.858 00:32:33.858 ' 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:33.858 22:55:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:33.858 22:55:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:33.858 22:55:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:33.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 
00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:33.858 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:33.859 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:33.859 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:33.859 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:32:33.859 22:55:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:32:35.760 
22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:35.760 22:55:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:35.760 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:35.760 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:35.760 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:35.761 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:35.761 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:35.761 22:55:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:36.018 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:36.018 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:36.018 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:36.018 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:36.018 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:36.018 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:36.018 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:36.018 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:36.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:36.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:32:36.018 00:32:36.018 --- 10.0.0.2 ping statistics --- 00:32:36.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.018 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:32:36.018 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:36.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:36.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:32:36.018 00:32:36.018 --- 10.0.0.1 ping statistics --- 00:32:36.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.018 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:32:36.018 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:36.018 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:32:36.018 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:36.018 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:36.018 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:36.018 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:36.018 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:36.018 
22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:36.018 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:36.018 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:36.018 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:36.018 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:36.018 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.018 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=363399 00:32:36.018 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:36.018 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 363399 00:32:36.018 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 363399 ']' 00:32:36.018 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:36.018 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:36.018 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:36.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:36.019 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:36.019 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:36.019 [2024-10-11 22:55:39.190041] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization...
00:32:36.019 [2024-10-11 22:55:39.190126] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:36.019 [2024-10-11 22:55:39.254979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:36.276 [2024-10-11 22:55:39.300728] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:36.276 [2024-10-11 22:55:39.300773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:36.276 [2024-10-11 22:55:39.300788] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:36.276 [2024-10-11 22:55:39.300801] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:36.276 [2024-10-11 22:55:39.300811] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:36.276 [2024-10-11 22:55:39.301343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:36.276 [2024-10-11 22:55:39.444560] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:36.276 [2024-10-11 22:55:39.452749] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:36.276 null0
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:36.276 null1
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=363545
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 363545 /tmp/host.sock
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 363545 ']'
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:32:36.276 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:36.276 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:36.276 [2024-10-11 22:55:39.524955] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization...
00:32:36.276 [2024-10-11 22:55:39.525033] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid363545 ]
00:32:36.534 [2024-10-11 22:55:39.582444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:36.535 [2024-10-11 22:55:39.627351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:36.535 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:36.535 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0
00:32:36.535 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:32:36.535 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:32:36.535 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:36.535 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:36.535 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:36.535 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:32:36.535 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:36.535 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:36.535 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:36.535 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0
00:32:36.535 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names
00:32:36.535 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:32:36.535 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:32:36.535 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:36.535 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:32:36.535 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:36.535 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:32:36.535 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:36.793 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:32:36.793 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:32:36.793 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:32:36.793 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:32:36.793 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:36.793 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:32:36.793 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:36.793 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:32:36.793 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:36.793 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:32:36.793 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:32:36.793 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:36.793 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:36.793 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:32:36.794 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:36.794 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:32:36.794 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:32:36.794 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:36.794 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:36.794 [2024-10-11 22:55:40.022330] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:36.794 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:36.794 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:32:36.794 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:32:36.794 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:32:36.794 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:36.794 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:32:36.794 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:36.794 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:32:36.794 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:37.051 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:37.052 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:37.052 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:32:37.052 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:32:37.052 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:32:37.052 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:32:37.052 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:32:37.052 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:32:37.052 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:32:37.052 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:32:37.052 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:37.052 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:37.052 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:32:37.052 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:32:37.052 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:37.052 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]]
00:32:37.052 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1
00:32:37.617 [2024-10-11 22:55:40.811651] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:32:37.617 [2024-10-11 22:55:40.811680] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:32:37.617 [2024-10-11 22:55:40.811704] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:32:37.875 [2024-10-11 22:55:40.898977] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:32:37.875 [2024-10-11 22:55:41.123768] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:32:37.875 [2024-10-11 22:55:41.123790] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]]
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:32:38.133 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:38.391 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:38.392 [2024-10-11 22:55:41.458689] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:32:38.392 [2024-10-11 22:55:41.459259] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:32:38.392 [2024-10-11 22:55:41.459288] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:32:38.392 [2024-10-11 22:55:41.546002] bdev_nvme.c:7077:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery --
common/autotest_common.sh@10 -- # set +x 00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:38.392 22:55:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:38.392 [2024-10-11 22:55:41.605735] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:38.392 [2024-10-11 22:55:41.605757] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:38.392 [2024-10-11 22:55:41.605767] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:39.325 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:39.325 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:39.325 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:39.325 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:39.325 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:39.325 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.325 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:39.325 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:32:39.325 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.584 [2024-10-11 22:55:42.671004] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:39.584 [2024-10-11 22:55:42.671057] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 
max=10 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:39.584 [2024-10-11 22:55:42.675324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:39.584 [2024-10-11 22:55:42.675358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.584 [2024-10-11 22:55:42.675391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:39.584 [2024-10-11 22:55:42.675405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.584 [2024-10-11 22:55:42.675419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:39.584 [2024-10-11 22:55:42.675432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.584 [2024-10-11 22:55:42.675447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:39.584 [2024-10-11 22:55:42.675461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.584 [2024-10-11 22:55:42.675481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe439c0 is same with the state(6) to be set 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:39.584 22:55:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:39.584 [2024-10-11 22:55:42.685317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe439c0 (9): Bad file descriptor 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.584 [2024-10-11 22:55:42.695357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:39.584 [2024-10-11 22:55:42.695629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:39.584 [2024-10-11 22:55:42.695660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe439c0 with addr=10.0.0.2, port=4420 00:32:39.584 [2024-10-11 22:55:42.695678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe439c0 is same with the state(6) to be set 00:32:39.584 [2024-10-11 22:55:42.695701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe439c0 (9): Bad file descriptor 00:32:39.584 [2024-10-11 22:55:42.695722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:39.584 [2024-10-11 22:55:42.695737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:39.584 
[2024-10-11 22:55:42.695753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:39.584 [2024-10-11 22:55:42.695773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:39.584 [2024-10-11 22:55:42.705448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:39.584 [2024-10-11 22:55:42.705623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:39.584 [2024-10-11 22:55:42.705651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe439c0 with addr=10.0.0.2, port=4420 00:32:39.584 [2024-10-11 22:55:42.705668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe439c0 is same with the state(6) to be set 00:32:39.584 [2024-10-11 22:55:42.705689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe439c0 (9): Bad file descriptor 00:32:39.584 [2024-10-11 22:55:42.705709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:39.584 [2024-10-11 22:55:42.705723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:39.584 [2024-10-11 22:55:42.705736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:39.584 [2024-10-11 22:55:42.705756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
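The `waitforcondition` helper whose expansion dominates the xtrace above (autotest_common.sh@914-920) polls an arbitrary condition up to `max` times with a one-second sleep between attempts. A minimal standalone sketch of that pattern, with names taken from the trace (the real helper in autotest_common.sh may differ in detail):

```shell
#!/usr/bin/env bash
# Poll a bash condition until it holds or attempts run out.
# Mirrors the local/eval/sleep loop visible in the xtrace; this is a sketch,
# not the exact autotest_common.sh implementation.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        # eval lets callers pass compound conditions such as
        # 'get_notification_count && ((notification_count == expected_count))'
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1   # condition never matched within max attempts
}
```

This matches the behavior seen in the log: the `[[ 4420 == \4\4\2\0\ \4\4\2\1 ]]` comparison fails at 22:55:41, the helper sleeps one second, and the retry at 22:55:42 sees both ports (`4420 4421`) and returns 0.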
00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:39.584 [2024-10-11 22:55:42.715547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:39.584 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:39.585 [2024-10-11 22:55:42.715759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:39.585 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:39.585 [2024-10-11 22:55:42.715788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe439c0 with addr=10.0.0.2, port=4420 00:32:39.585 [2024-10-11 22:55:42.715805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe439c0 is same with the state(6) to be set 00:32:39.585 [2024-10-11 22:55:42.715826] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe439c0 (9): Bad file descriptor 00:32:39.585 [2024-10-11 22:55:42.715846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:39.585 [2024-10-11 22:55:42.715861] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:39.585 [2024-10-11 22:55:42.715882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:32:39.585 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:39.585 [2024-10-11 22:55:42.715902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:39.585 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:39.585 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:39.585 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:39.585 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:39.585 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.585 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.585 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:39.585 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:39.585 [2024-10-11 22:55:42.725632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:39.585 [2024-10-11 22:55:42.725786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:39.585 [2024-10-11 22:55:42.725816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe439c0 with addr=10.0.0.2, port=4420 00:32:39.585 [2024-10-11 22:55:42.725833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe439c0 is same with the state(6) to be set 00:32:39.585 [2024-10-11 22:55:42.725865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe439c0 (9): Bad file descriptor 00:32:39.585 [2024-10-11 22:55:42.725885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:39.585 [2024-10-11 22:55:42.725898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:39.585 [2024-10-11 22:55:42.725911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:39.585 [2024-10-11 22:55:42.725930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:39.585 [2024-10-11 22:55:42.735710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:39.585 [2024-10-11 22:55:42.735862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:39.585 [2024-10-11 22:55:42.735889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe439c0 with addr=10.0.0.2, port=4420 00:32:39.585 [2024-10-11 22:55:42.735905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe439c0 is same with the state(6) to be set 00:32:39.585 [2024-10-11 22:55:42.735928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe439c0 (9): Bad file descriptor 00:32:39.585 [2024-10-11 22:55:42.735954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:39.585 [2024-10-11 22:55:42.735968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:39.585 [2024-10-11 22:55:42.735981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:39.585 [2024-10-11 22:55:42.736000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:39.585 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.585 [2024-10-11 22:55:42.745784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:39.585 [2024-10-11 22:55:42.745970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:39.585 [2024-10-11 22:55:42.745997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe439c0 with addr=10.0.0.2, port=4420 00:32:39.585 [2024-10-11 22:55:42.746013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe439c0 is same with the state(6) to be set 00:32:39.585 [2024-10-11 22:55:42.746035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe439c0 (9): Bad file descriptor 00:32:39.585 [2024-10-11 22:55:42.746055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:39.585 [2024-10-11 22:55:42.746069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:39.585 [2024-10-11 22:55:42.746082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:39.585 [2024-10-11 22:55:42.746101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:39.585 [2024-10-11 22:55:42.755858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:39.585 [2024-10-11 22:55:42.756060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:39.585 [2024-10-11 22:55:42.756086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe439c0 with addr=10.0.0.2, port=4420 00:32:39.585 [2024-10-11 22:55:42.756102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe439c0 is same with the state(6) to be set 00:32:39.585 [2024-10-11 22:55:42.756123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe439c0 (9): Bad file descriptor 00:32:39.585 [2024-10-11 22:55:42.756143] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:39.585 [2024-10-11 22:55:42.756157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:39.585 [2024-10-11 22:55:42.756169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:39.585 [2024-10-11 22:55:42.756198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
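The `get_subsystem_names` and `get_bdev_list` helpers expanded in the trace (host/discovery.sh@55-63) each reduce an RPC result to a sorted, space-joined list so it can be string-compared inside a `waitforcondition` condition. A self-contained sketch of that shape, with `rpc_cmd` stubbed to return canned JSON and a grep/sed stand-in for the `jq -r '.[].name'` used in the real helpers:

```shell
#!/usr/bin/env bash
# Stub: the real rpc_cmd talks to SPDK over /tmp/host.sock.
rpc_cmd() {
    case $3 in
        bdev_get_bdevs)            echo '[{"name":"nvme0n2"},{"name":"nvme0n1"}]' ;;
        bdev_nvme_get_controllers) echo '[{"name":"nvme0"}]' ;;
    esac
}

# Extract every "name" field, sort, and join with spaces (sort | xargs,
# exactly as in the trace; grep/sed replaces jq to keep the sketch minimal).
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs |
        grep -o '"name":"[^"]*"' | sed 's/.*:"\(.*\)"/\1/' | sort | xargs
}

get_subsystem_names() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers |
        grep -o '"name":"[^"]*"' | sed 's/.*:"\(.*\)"/\1/' | sort | xargs
}
```

With this shape, the conditions in the log such as `[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]` are plain string comparisons against a canonical ordering.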
00:32:39.585 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:39.585 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:39.585 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:39.585 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:39.585 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:39.585 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:39.585 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:39.585 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:39.585 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:39.585 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:39.585 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.585 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.585 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:39.585 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:39.585 [2024-10-11 22:55:42.765945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:39.585 [2024-10-11 22:55:42.766111] 
posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:39.585 [2024-10-11 22:55:42.766139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe439c0 with addr=10.0.0.2, port=4420 00:32:39.585 [2024-10-11 22:55:42.766156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe439c0 is same with the state(6) to be set 00:32:39.585 [2024-10-11 22:55:42.766177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe439c0 (9): Bad file descriptor 00:32:39.585 [2024-10-11 22:55:42.766197] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:39.585 [2024-10-11 22:55:42.766210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:39.585 [2024-10-11 22:55:42.766224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:39.585 [2024-10-11 22:55:42.766244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:39.585 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.585 [2024-10-11 22:55:42.776032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:39.585 [2024-10-11 22:55:42.776220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:39.585 [2024-10-11 22:55:42.776247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe439c0 with addr=10.0.0.2, port=4420 00:32:39.585 [2024-10-11 22:55:42.776263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe439c0 is same with the state(6) to be set 00:32:39.585 [2024-10-11 22:55:42.776285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe439c0 (9): Bad file descriptor 00:32:39.585 [2024-10-11 22:55:42.776304] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:39.585 [2024-10-11 22:55:42.776318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:39.585 [2024-10-11 22:55:42.776331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:39.585 [2024-10-11 22:55:42.776350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:39.585 [2024-10-11 22:55:42.786116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:39.585 [2024-10-11 22:55:42.786328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:39.585 [2024-10-11 22:55:42.786355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe439c0 with addr=10.0.0.2, port=4420 00:32:39.585 [2024-10-11 22:55:42.786371] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe439c0 is same with the state(6) to be set 00:32:39.585 [2024-10-11 22:55:42.786392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe439c0 (9): Bad file descriptor 00:32:39.585 [2024-10-11 22:55:42.786412] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:39.585 [2024-10-11 22:55:42.786426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:39.585 [2024-10-11 22:55:42.786444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:39.585 [2024-10-11 22:55:42.786464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:39.585 [2024-10-11 22:55:42.796198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:39.585 [2024-10-11 22:55:42.796365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:39.585 [2024-10-11 22:55:42.796393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe439c0 with addr=10.0.0.2, port=4420 00:32:39.585 [2024-10-11 22:55:42.796410] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe439c0 is same with the state(6) to be set 00:32:39.586 [2024-10-11 22:55:42.796431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe439c0 (9): Bad file descriptor 00:32:39.586 [2024-10-11 22:55:42.796451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:39.586 [2024-10-11 22:55:42.796464] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:39.586 [2024-10-11 22:55:42.796477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:39.586 [2024-10-11 22:55:42.796496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
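The notification bookkeeping visible at host/discovery.sh@74-80 fetches events newer than `notify_id`, counts them (`jq '. | length'` in the real helper), and advances the cursor so each event is counted once. A sketch of that flow with a stubbed `rpc_cmd` and a grep/wc stand-in for jq:

```shell
#!/usr/bin/env bash
notify_id=0

# Stub: pretend exactly two notifications exist past id 0; the real rpc_cmd
# queries /tmp/host.sock with notify_get_notifications -i "$notify_id".
rpc_cmd() {
    echo '[{"id":1},{"id":2}]'
}

get_notification_count() {
    # Count returned events and advance the cursor past them.
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
        -i "$notify_id" | grep -o '"id"' | wc -l)
    notify_id=$((notify_id + notification_count))
}

is_notification_count_eq() {
    local expected_count=$1
    get_notification_count
    (( notification_count == expected_count ))
}
```

Advancing `notify_id` is why the log shows `notify_id=2` staying fixed while `notification_count` drops back to 0 on later `is_notification_count_eq 0` checks: no new events arrive after the cursor, so the count of fresh notifications is zero.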
00:32:39.586 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:32:39.586 22:55:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:39.586 [2024-10-11 22:55:42.798711] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:39.586 [2024-10-11 22:55:42.798740] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # return 0 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:40.958 22:55:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:40.958 
22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:40.958 22:55:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.958 22:55:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.958 22:55:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:40.958 22:55:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:40.958 22:55:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:40.958 22:55:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:40.958 22:55:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:40.958 22:55:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.958 22:55:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.891 [2024-10-11 22:55:45.073707] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:41.891 [2024-10-11 22:55:45.073741] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:41.891 [2024-10-11 22:55:45.073762] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:42.149 [2024-10-11 22:55:45.160021] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:42.149 [2024-10-11 22:55:45.228650] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:42.149 [2024-10-11 22:55:45.228695] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:32:42.149 request: 00:32:42.149 { 00:32:42.149 "name": "nvme", 00:32:42.149 "trtype": "tcp", 00:32:42.149 "traddr": "10.0.0.2", 00:32:42.149 "adrfam": "ipv4", 00:32:42.149 "trsvcid": "8009", 00:32:42.149 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:42.149 "wait_for_attach": true, 00:32:42.149 "method": "bdev_nvme_start_discovery", 00:32:42.149 "req_id": 1 00:32:42.149 } 00:32:42.149 Got JSON-RPC error response 00:32:42.149 response: 00:32:42.149 { 00:32:42.149 "code": -17, 00:32:42.149 "message": "File exists" 00:32:42.149 } 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.149 22:55:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:42.149 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:42.150 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:42.150 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:42.150 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:42.150 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:42.150 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:42.150 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.150 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.150 request: 00:32:42.150 { 00:32:42.150 "name": "nvme_second", 00:32:42.150 "trtype": "tcp", 00:32:42.150 "traddr": "10.0.0.2", 00:32:42.150 "adrfam": "ipv4", 00:32:42.150 "trsvcid": "8009", 00:32:42.150 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:42.150 "wait_for_attach": true, 00:32:42.150 "method": "bdev_nvme_start_discovery", 00:32:42.150 "req_id": 1 00:32:42.150 } 00:32:42.150 Got JSON-RPC error response 00:32:42.150 response: 00:32:42.150 { 00:32:42.150 "code": -17, 00:32:42.150 "message": "File exists" 00:32:42.150 } 00:32:42.150 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:42.150 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:42.150 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:42.150 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:42.150 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:42.150 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:42.150 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:42.150 
22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:42.150 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.150 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.150 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:42.150 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:42.150 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.150 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:42.150 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:42.150 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:42.150 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:42.150 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.150 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.150 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:42.150 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:42.150 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.408 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:42.408 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:42.408 22:55:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:42.408 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:42.408 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:42.408 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:42.408 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:42.408 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:42.408 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:42.408 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.408 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.340 [2024-10-11 22:55:46.444568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.340 [2024-10-11 22:55:46.444622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe79e90 with addr=10.0.0.2, port=8010 00:32:43.340 [2024-10-11 22:55:46.444659] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:43.340 [2024-10-11 22:55:46.444675] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:43.340 [2024-10-11 22:55:46.444687] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:44.272 [2024-10-11 22:55:47.447034] 
posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.272 [2024-10-11 22:55:47.447085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe79e90 with addr=10.0.0.2, port=8010 00:32:44.272 [2024-10-11 22:55:47.447107] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:44.272 [2024-10-11 22:55:47.447121] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:44.272 [2024-10-11 22:55:47.447132] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:45.204 [2024-10-11 22:55:48.449298] bdev_nvme.c:7196:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:45.204 request: 00:32:45.204 { 00:32:45.204 "name": "nvme_second", 00:32:45.204 "trtype": "tcp", 00:32:45.204 "traddr": "10.0.0.2", 00:32:45.204 "adrfam": "ipv4", 00:32:45.204 "trsvcid": "8010", 00:32:45.204 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:45.204 "wait_for_attach": false, 00:32:45.204 "attach_timeout_ms": 3000, 00:32:45.204 "method": "bdev_nvme_start_discovery", 00:32:45.204 "req_id": 1 00:32:45.204 } 00:32:45.204 Got JSON-RPC error response 00:32:45.204 response: 00:32:45.204 { 00:32:45.204 "code": -110, 00:32:45.204 "message": "Connection timed out" 00:32:45.204 } 00:32:45.204 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:45.204 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:45.204 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:45.204 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:45.204 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:45.204 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 
-- # get_discovery_ctrlrs 00:32:45.204 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:45.204 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:45.204 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.204 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:45.204 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.204 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:45.204 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.462 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:45.462 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:45.462 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 363545 00:32:45.462 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:45.463 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:45.463 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:32:45.463 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:45.463 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:32:45.463 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:45.463 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:45.463 rmmod nvme_tcp 00:32:45.463 rmmod nvme_fabrics 00:32:45.463 rmmod nvme_keyring 00:32:45.463 22:55:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:45.463 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:32:45.463 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:32:45.463 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 363399 ']' 00:32:45.463 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 363399 00:32:45.463 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 363399 ']' 00:32:45.463 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 363399 00:32:45.463 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:32:45.463 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:45.463 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 363399 00:32:45.463 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:45.463 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:45.463 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 363399' 00:32:45.463 killing process with pid 363399 00:32:45.463 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 363399 00:32:45.463 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 363399 00:32:45.721 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:45.721 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:45.721 22:55:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:45.721 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:32:45.721 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:32:45.721 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:45.721 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:32:45.721 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:45.721 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:45.721 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.721 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:45.721 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.626 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:47.626 00:32:47.626 real 0m14.176s 00:32:47.626 user 0m20.770s 00:32:47.626 sys 0m2.890s 00:32:47.626 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:47.626 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.626 ************************************ 00:32:47.626 END TEST nvmf_host_discovery 00:32:47.626 ************************************ 00:32:47.626 22:55:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:47.626 22:55:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:47.626 
22:55:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:47.626 22:55:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.626 ************************************ 00:32:47.626 START TEST nvmf_host_multipath_status 00:32:47.626 ************************************ 00:32:47.626 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:47.885 * Looking for test storage... 00:32:47.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:32:47.885 22:55:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:32:47.885 
22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:47.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.885 --rc genhtml_branch_coverage=1 00:32:47.885 --rc genhtml_function_coverage=1 00:32:47.885 --rc genhtml_legend=1 00:32:47.885 --rc geninfo_all_blocks=1 00:32:47.885 --rc geninfo_unexecuted_blocks=1 00:32:47.885 00:32:47.885 ' 00:32:47.885 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:47.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.885 --rc genhtml_branch_coverage=1 00:32:47.885 --rc genhtml_function_coverage=1 00:32:47.885 --rc genhtml_legend=1 00:32:47.885 --rc geninfo_all_blocks=1 00:32:47.885 --rc geninfo_unexecuted_blocks=1 00:32:47.885 00:32:47.885 ' 00:32:47.885 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:47.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.885 --rc genhtml_branch_coverage=1 00:32:47.885 --rc genhtml_function_coverage=1 00:32:47.885 --rc genhtml_legend=1 00:32:47.885 --rc geninfo_all_blocks=1 00:32:47.885 --rc geninfo_unexecuted_blocks=1 00:32:47.885 00:32:47.885 ' 00:32:47.885 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:47.885 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:32:47.885 --rc genhtml_branch_coverage=1 00:32:47.885 --rc genhtml_function_coverage=1 00:32:47.885 --rc genhtml_legend=1 00:32:47.885 --rc geninfo_all_blocks=1 00:32:47.885 --rc geninfo_unexecuted_blocks=1 00:32:47.885 00:32:47.885 ' 00:32:47.885 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:47.885 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:47.885 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:47.885 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:47.885 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:47.885 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:47.885 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:47.885 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:47.885 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:47.885 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:47.885 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:47.885 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:47.885 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:47.885 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:47.885 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:47.885 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:47.885 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:47.885 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:47.885 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:47.885 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:32:47.885 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:47.885 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:47.885 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:47.886 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:47.886 22:55:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:32:47.886 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:50.419 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:50.419 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:50.419 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:50.419 22:55:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.419 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:50.420 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:50.420 22:55:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:50.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:50.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:32:50.420 00:32:50.420 --- 10.0.0.2 ping statistics --- 00:32:50.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.420 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:50.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:50.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:32:50.420 00:32:50.420 --- 10.0.0.1 ping statistics --- 00:32:50.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.420 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=366716 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 366716 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 366716 ']' 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:50.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:50.420 [2024-10-11 22:55:53.296665] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:32:50.420 [2024-10-11 22:55:53.296750] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:50.420 [2024-10-11 22:55:53.361319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:50.420 [2024-10-11 22:55:53.405592] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:50.420 [2024-10-11 22:55:53.405662] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:50.420 [2024-10-11 22:55:53.405676] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:50.420 [2024-10-11 22:55:53.405702] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:50.420 [2024-10-11 22:55:53.405712] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:50.420 [2024-10-11 22:55:53.407177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.420 [2024-10-11 22:55:53.407182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=366716 00:32:50.420 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:50.678 [2024-10-11 22:55:53.843714] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:50.678 22:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:32:50.936 Malloc0 00:32:51.194 22:55:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:32:51.458 22:55:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:51.718 22:55:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:51.975 [2024-10-11 22:55:55.045135] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:51.975 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:52.232 [2024-10-11 22:55:55.313845] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:52.232 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=366884 00:32:52.232 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:52.232 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:52.232 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 366884 /var/tmp/bdevperf.sock 00:32:52.232 22:55:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 366884 ']' 00:32:52.232 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:52.232 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:52.232 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:52.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:52.232 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:52.232 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:52.490 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:52.490 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:32:52.490 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:52.748 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:53.313 Nvme0n1 00:32:53.313 22:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:53.570 Nvme0n1 00:32:53.570 22:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:53.570 22:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:55.468 22:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:32:55.468 22:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:56.034 22:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:56.034 22:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:32:57.407 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:32:57.407 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:57.407 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.407 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:57.407 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
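The `port_status` checks in this trace pipe `bdev_nvme_get_io_paths` output through a `jq` filter keyed on the listener port. That filter can be exercised standalone against a hand-written sample document; the JSON below is illustrative, not captured from this run, with field names taken from the `jq` expressions in the trace:

```shell
# Illustrative sample shaped like bdev_nvme_get_io_paths output (values made up).
json='{"poll_groups":[{"io_paths":[{"transport":{"trsvcid":"4420"},"current":true,"connected":true,"accessible":true},{"transport":{"trsvcid":"4421"},"current":false,"connected":true,"accessible":true}]}]}'

# Same filter shape as the trace: pick the io_path on port 4420, print one field.
echo "$json" | jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current'
```

The `check_status` helper in the trace simply repeats this query for the `current`, `connected`, and `accessible` fields on both ports (4420 and 4421) and compares each result against an expected literal.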
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.407 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:57.407 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.407 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:57.665 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:57.665 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:57.665 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.665 22:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:57.923 22:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.923 22:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:57.923 22:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.923 22:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:58.181 22:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
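The repeated `[[ true == \t\r\u\e ]]` lines in this trace are an artifact of bash xtrace, which prints the unquoted right-hand side of a `[[ ]]` comparison with every character backslash-escaped so it cannot be re-read as a glob pattern; the comparison that actually runs is a plain string match:

```shell
# What the trace's `[[ true == \t\r\u\e ]]` executes: a literal string
# comparison ("\t\r\u\e" is just "true" with pattern interpretation suppressed).
status=true
if [[ $status == \t\r\u\e ]]; then
    echo match
fi
```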
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:58.181 22:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:58.181 22:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:58.181 22:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:58.439 22:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:58.439 22:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:58.439 22:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:58.439 22:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:58.696 22:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:58.696 22:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:32:58.696 22:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:59.260 22:56:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:59.260 22:56:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:00.636 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:00.636 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:00.636 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.636 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:00.636 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:00.636 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:00.636 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.636 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:00.894 22:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.894 22:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:00.894 22:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.894 22:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:01.153 22:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.153 22:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:01.153 22:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.153 22:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:01.411 22:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.411 22:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:01.411 22:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.411 22:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:01.669 22:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.669 22:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:01.669 22:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.669 22:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:01.927 22:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.927 22:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:01.927 22:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:02.185 22:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:02.751 22:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:03.685 22:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:03.685 22:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:03.685 22:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.685 22:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:03.943 22:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.943 22:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:03.943 22:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.943 22:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:04.201 22:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:04.201 22:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:04.201 22:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.201 22:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:04.459 22:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.459 22:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:04.459 22:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.459 22:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:04.717 22:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.717 22:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:04.717 22:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.717 22:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:04.975 22:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.975 22:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:04.975 22:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.975 22:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:05.233 22:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.233 22:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:05.234 22:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:05.492 22:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:05.750 22:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:06.684 22:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:06.684 22:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:06.684 22:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:06.684 22:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:07.250 22:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:07.250 22:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:07.250 22:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.250 22:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:07.250 22:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:07.250 22:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:07.250 22:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.250 22:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:07.816 22:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:07.816 22:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:07.816 22:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.816 22:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:07.816 22:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:07.816 22:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:07.816 22:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.816 22:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:08.382 22:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.382 22:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:08.382 22:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.382 22:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:08.382 22:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:08.382 22:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:08.382 22:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:08.947 22:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:08.947 22:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:10.320 22:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:10.320 22:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:10.320 22:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.320 22:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:10.320 22:56:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:10.320 22:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:10.320 22:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.320 22:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:10.578 22:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:10.578 22:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:10.578 22:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.578 22:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:10.836 22:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.836 22:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:10.836 22:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.836 22:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:11.094 
22:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:11.094 22:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:11.094 22:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.094 22:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:11.352 22:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:11.352 22:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:11.352 22:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.352 22:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:11.610 22:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:11.610 22:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:11.610 22:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:11.867 22:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:12.125 22:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:13.499 22:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:13.499 22:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:13.499 22:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.499 22:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:13.499 22:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:13.499 22:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:13.499 22:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.499 22:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:13.757 22:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.757 22:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:13.757 22:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.757 22:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:14.014 22:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.015 22:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:14.015 22:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.015 22:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:14.272 22:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.272 22:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:14.272 22:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.272 22:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:14.531 22:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:14.531 22:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:14.531 22:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.531 22:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:14.789 22:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.789 22:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:15.355 22:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:33:15.355 22:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:15.355 22:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:15.613 22:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:16.984 22:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:16.984 22:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:16.985 22:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:33:16.985 22:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:16.985 22:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.985 22:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:16.985 22:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.985 22:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:17.242 22:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.242 22:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:17.242 22:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.242 22:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:17.500 22:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.500 22:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:17.500 22:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:33:17.500 22:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:17.758 22:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.758 22:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:17.758 22:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.759 22:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:18.016 22:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.016 22:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:18.016 22:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.017 22:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:18.582 22:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.582 22:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:18.582 22:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:18.582 22:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:18.840 22:56:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:20.214 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:20.214 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:20.214 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.214 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:20.214 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:20.214 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:20.214 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.214 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:20.471 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.471 22:56:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:20.471 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.471 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:20.729 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.729 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:20.729 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.729 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:20.987 22:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.987 22:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:20.987 22:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.987 22:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:21.245 22:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.245 
22:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:21.245 22:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.245 22:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:21.502 22:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.502 22:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:21.503 22:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:22.068 22:56:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:22.068 22:56:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:33:23.441 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:23.441 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:23.441 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.441 22:56:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:23.441 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:23.441 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:23.441 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.442 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:23.700 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:23.700 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:23.700 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.700 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:23.958 22:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:23.958 22:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:23.958 22:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.958 22:56:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:24.216 22:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.216 22:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:24.216 22:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.216 22:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:24.474 22:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.474 22:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:24.474 22:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.474 22:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:24.732 22:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.732 22:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:24.732 22:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:24.990 22:56:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:25.554 22:56:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:26.487 22:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:26.487 22:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:26.487 22:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.487 22:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:26.745 22:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.746 22:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:26.746 22:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.746 22:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:27.004 22:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:27.004 22:56:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:27.004 22:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:27.004 22:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:27.262 22:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:27.262 22:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:27.262 22:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:27.262 22:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:27.520 22:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:27.520 22:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:27.520 22:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:27.520 22:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:27.778 22:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:27.778 
22:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:27.778 22:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:27.778 22:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:28.036 22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:28.036 22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 366884 00:33:28.036 22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 366884 ']' 00:33:28.036 22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 366884 00:33:28.036 22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:33:28.036 22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:28.036 22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 366884 00:33:28.036 22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:33:28.036 22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:33:28.036 22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 366884' 00:33:28.036 killing process with pid 366884 00:33:28.036 22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 366884 00:33:28.036 22:56:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 366884 00:33:28.036 { 00:33:28.036 "results": [ 00:33:28.036 { 00:33:28.036 "job": "Nvme0n1", 00:33:28.036 "core_mask": "0x4", 00:33:28.036 "workload": "verify", 00:33:28.036 "status": "terminated", 00:33:28.036 "verify_range": { 00:33:28.036 "start": 0, 00:33:28.036 "length": 16384 00:33:28.036 }, 00:33:28.036 "queue_depth": 128, 00:33:28.036 "io_size": 4096, 00:33:28.036 "runtime": 34.368694, 00:33:28.036 "iops": 7885.693881763445, 00:33:28.036 "mibps": 30.803491725638455, 00:33:28.036 "io_failed": 0, 00:33:28.036 "io_timeout": 0, 00:33:28.036 "avg_latency_us": 16204.35558527035, 00:33:28.036 "min_latency_us": 512.7585185185185, 00:33:28.036 "max_latency_us": 4026531.84 00:33:28.036 } 00:33:28.036 ], 00:33:28.036 "core_count": 1 00:33:28.036 } 00:33:28.298 22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 366884 00:33:28.298 22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:28.298 [2024-10-11 22:55:55.381325] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:33:28.298 [2024-10-11 22:55:55.381448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid366884 ] 00:33:28.298 [2024-10-11 22:55:55.443936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.298 [2024-10-11 22:55:55.490163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:28.298 Running I/O for 90 seconds... 
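The `port_status` checks traced above all follow one pattern: query the bdevperf app over its RPC socket with `bdev_nvme_get_io_paths`, pick the io_path on a given listener port with `jq`, and compare one status flag (`current`, `connected`, `accessible`) against the expected value. The sketch below replays that `jq` filter against a hand-written sample document; the JSON shape is an assumption reconstructed from the fields the log's filters select (`.poll_groups[].io_paths[].transport.trsvcid` plus the three flags), not captured output, and it needs `jq` on PATH.

```shell
#!/usr/bin/env bash
# Illustrative sample only: a hand-written stand-in for what
# `rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths` returns,
# shaped to match the fields selected by multipath_status.sh@64 above.
sample='{
  "poll_groups": [
    { "io_paths": [
        { "transport": { "trsvcid": "4420" },
          "current": true,  "connected": true, "accessible": true },
        { "transport": { "trsvcid": "4421" },
          "current": false, "connected": true, "accessible": true }
    ] }
  ]
}'

# Same filter as the traced port_status helper: select the path whose
# listener port matches, then print one status flag as a raw string.
status=$(echo "$sample" | jq -r \
    '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current')

# The log's [[ true == \t\r\u\e ]] comparisons are literal string matches
# (the backslashes just defeat glob expansion); plain equality is equivalent.
if [ "$status" = "true" ]; then
    echo "port 4420 is the current path"
fi
```

With an `active_active` multipath policy (set earlier via `bdev_nvme_set_multipath_policy`), both paths report `current=true`; after a listener's ANA state is set to `inaccessible`, its path is expected to flip `current` and `accessible` to `false`, which is exactly what the `check_status` sequences above assert.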
00:33:28.298 8083.00 IOPS, 31.57 MiB/s [2024-10-11T20:56:31.566Z] 8201.50 IOPS, 32.04 MiB/s [2024-10-11T20:56:31.566Z] 8241.67 IOPS, 32.19 MiB/s [2024-10-11T20:56:31.566Z] 8275.75 IOPS, 32.33 MiB/s [2024-10-11T20:56:31.566Z] 8280.80 IOPS, 32.35 MiB/s [2024-10-11T20:56:31.566Z] 8300.00 IOPS, 32.42 MiB/s [2024-10-11T20:56:31.566Z] 8315.71 IOPS, 32.48 MiB/s [2024-10-11T20:56:31.566Z] 8328.88 IOPS, 32.53 MiB/s [2024-10-11T20:56:31.566Z] 8348.22 IOPS, 32.61 MiB/s [2024-10-11T20:56:31.566Z] 8351.80 IOPS, 32.62 MiB/s [2024-10-11T20:56:31.566Z] 8355.91 IOPS, 32.64 MiB/s [2024-10-11T20:56:31.566Z] 8356.33 IOPS, 32.64 MiB/s [2024-10-11T20:56:31.566Z] 8352.77 IOPS, 32.63 MiB/s [2024-10-11T20:56:31.566Z] 8354.79 IOPS, 32.64 MiB/s [2024-10-11T20:56:31.566Z] 8367.07 IOPS, 32.68 MiB/s [2024-10-11T20:56:31.566Z] [2024-10-11 22:56:11.888934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:88840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.298 [2024-10-11 22:56:11.888982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.889058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:88848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.298 [2024-10-11 22:56:11.889081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.889106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.298 [2024-10-11 22:56:11.889139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.889163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 
nsid:1 lba:88864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.298 [2024-10-11 22:56:11.889180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.889220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:88872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.298 [2024-10-11 22:56:11.889236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.889259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.298 [2024-10-11 22:56:11.889277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.889299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:88888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.298 [2024-10-11 22:56:11.889316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.889340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:88896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.298 [2024-10-11 22:56:11.889357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.889944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.298 [2024-10-11 22:56:11.889970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:90 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.890010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.298 [2024-10-11 22:56:11.890030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.890055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.298 [2024-10-11 22:56:11.890073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.890097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.298 [2024-10-11 22:56:11.890114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.890136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:88936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.298 [2024-10-11 22:56:11.890154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.890177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:88944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.298 [2024-10-11 22:56:11.890194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.890231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:88952 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:28.298 [2024-10-11 22:56:11.890249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.890272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.298 [2024-10-11 22:56:11.890289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.890311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:88584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.298 [2024-10-11 22:56:11.890343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.890367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:88592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.298 [2024-10-11 22:56:11.890383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.890421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:88600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.298 [2024-10-11 22:56:11.890438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.890462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:88608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.298 [2024-10-11 22:56:11.890479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 
m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.890502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:88616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.298 [2024-10-11 22:56:11.890519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.890543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:88624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.298 [2024-10-11 22:56:11.890574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.890600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:88632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.298 [2024-10-11 22:56:11.890617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.890641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:88640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.298 [2024-10-11 22:56:11.890657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.890681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:88648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.298 [2024-10-11 22:56:11.890697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.890721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:28.298 [2024-10-11 22:56:11.890738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.890761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:88664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.298 [2024-10-11 22:56:11.890778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.890801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:88672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.298 [2024-10-11 22:56:11.890818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.890841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:88680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.298 [2024-10-11 22:56:11.890858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.890881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:88688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.298 [2024-10-11 22:56:11.890897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:28.298 [2024-10-11 22:56:11.890920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:88696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.299 [2024-10-11 22:56:11.890937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:28.299 
[2024-10-11 22:56:11.890961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.299 [2024-10-11 22:56:11.890978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.891001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:88968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.891018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.891041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:88976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.891062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.891087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:88984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.891104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.891127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:88992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.891144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.891167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:89000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 
22:56:11.891184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.891207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:89008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.891224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.891247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:89016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.891282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.891306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.891322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.891361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:89032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.891378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.891400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:89040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.891418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 
22:56:11.891441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:89048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.891457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.891480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:89056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.891496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.891520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.891537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.891572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.891591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.891619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.891637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.891660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:89088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.891677] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.891699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:88712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.299 [2024-10-11 22:56:11.891716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.891739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:89096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.891756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.891779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:89104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.891796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.891818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.891834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.891858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:89120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.891890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.891913] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:89128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.891929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.891970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.891987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.892010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:89144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.892027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.892050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:89152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.892066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.892089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.892106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.892133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:89168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.892150] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.892173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:89176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.892190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.892213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:89184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.892246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.892271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:89192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.892302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.892325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:89200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.892341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.892363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.892379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.892400] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:89216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.892416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.892438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:89224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.892454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.892475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:89232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.892491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.892513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.892544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.892576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:89248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.892593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.892615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:89256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.892632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.892655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:89264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.892675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:28.299 [2024-10-11 22:56:11.892922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:89272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.299 [2024-10-11 22:56:11.892953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.892988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:88720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.300 [2024-10-11 22:56:11.893007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.893036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:88728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.300 [2024-10-11 22:56:11.893053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.893080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:88736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.300 [2024-10-11 22:56:11.893098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.893125] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.300 [2024-10-11 22:56:11.893143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.893184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:88752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.300 [2024-10-11 22:56:11.893202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.893229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:88760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.300 [2024-10-11 22:56:11.893246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.893273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:88768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.300 [2024-10-11 22:56:11.893290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.893316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:89280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.893332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.893359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.893376] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.893402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:89296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.893419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.893445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:89304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.893470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.893498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:89312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.893514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.893565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:89320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.893586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.893614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:89328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.893632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.893659] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:89336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.893677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.893704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:89344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.893721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.893749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:89352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.893766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.893794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.893811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.893838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:89368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.893855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.893898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:89376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.893915] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.893941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:89384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.893958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.893985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:89392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.894002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.894029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:89400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.894046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.894077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:89408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.894094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.894120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:89416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.894137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.894164] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.894180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.894207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:89432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.894223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.894251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:89440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.894267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.894293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.894310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.894336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:89456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.894353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.894380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:89464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.894397] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.894423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:89472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.894439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.894466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:89480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.894483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.894510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:89488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.894527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.894581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.894599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.894632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:89504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.894649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.894676] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:89512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.894693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.894721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:89520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.894738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.894766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:89528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.894783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.894810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:89536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.894827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.894870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:89544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.894887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:28.300 [2024-10-11 22:56:11.894914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:89552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.300 [2024-10-11 22:56:11.894930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:28.301
[2024-10-11 22:56:11.894956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.301
[2024-10-11 22:56:11.894973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:28.301
[... repeated *NOTICE* command/completion pairs elided: WRITE (lba:89568-89600) and READ (lba:88776-88832) commands on qid:1, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
7860.06 IOPS, 30.70 MiB/s [2024-10-11T20:56:31.569Z]
7397.71 IOPS, 28.90 MiB/s
6986.72 IOPS, 27.29 MiB/s
6619.00 IOPS, 25.86 MiB/s
6701.55 IOPS, 26.18 MiB/s
6788.05 IOPS, 26.52 MiB/s
6886.77 IOPS, 26.90 MiB/s
7076.57 IOPS, 27.64 MiB/s
7229.38 IOPS, 28.24 MiB/s
7385.48 IOPS, 28.85 MiB/s
7419.35 IOPS, 28.98 MiB/s
7450.30 IOPS, 29.10 MiB/s
7488.79 IOPS, 29.25 MiB/s
7572.41 IOPS, 29.58 MiB/s
7690.77 IOPS, 30.04 MiB/s
7796.65 IOPS, 30.46 MiB/s
[2024-10-11 22:56:28.510036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.301
[2024-10-11 22:56:28.510092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:28.301
[... repeated *NOTICE* command/completion pairs elided: WRITE (lba:19680-19976) and READ (lba:18984-19632) commands on qid:1, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
7857.69 IOPS, 30.69 MiB/s [2024-10-11T20:56:31.570Z]
7873.67 IOPS, 30.76 MiB/s
7880.94 IOPS, 30.78 MiB/s
Received shutdown signal, test time was about 34.369498 seconds 00:33:28.302
00:33:28.302
                                                                Latency(us)
Device Information          : runtime(s)       IOPS      MiB/s     Fail/s       TO/s     Average        min         max
Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
Verification LBA range: start 0x0 length 0x4000
Nvme0n1                     :      34.37    7885.69      30.80       0.00       0.00    16204.36     512.76  4026531.84
===================================================================================================================
Total                       :              7885.69      30.80       0.00       0.00    16204.36     512.76  4026531.84
22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:28.561
22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:33:28.561
22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:28.561
22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:33:28.561
22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:28.561
22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:33:28.561
22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:28.561
22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:33:28.561
22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:28.561
22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:28.561
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:28.561
22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:33:28.561
22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:33:28.561
22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 366716 ']' 00:33:28.561
22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 366716 00:33:28.561
22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 366716 ']' 00:33:28.561
22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 366716 00:33:28.561
22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:33:28.561
22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:28.561
22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 366716 00:33:28.561
22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:28.561
22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:28.561
22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 366716' 00:33:28.561
killing process with pid 366716
22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 366716 00:33:28.561
22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 366716 00:33:28.820
22:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:28.820
22:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:28.820
22:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:28.820
22:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:33:28.820
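As a side note on the bdevperf summary above: the MiB/s column is simply the IOPS column scaled by the fixed 4096-byte I/O size shown in the job line. A minimal sanity check of the reported figures (the helper function name is our own illustration, not part of SPDK):

```python
def iops_to_mib_s(iops: float, io_size_bytes: int = 4096) -> float:
    """Throughput in MiB/s for a fixed I/O size: IOPS * bytes-per-I/O / 2**20."""
    return iops * io_size_bytes / (1024 * 1024)

# The summary reports 7885.69 IOPS at IO size 4096 -> about 30.80 MiB/s,
# matching the MiB/s column of the latency table.
print(f"{iops_to_mib_s(7885.69):.2f} MiB/s")
```

The same scaling reproduces the periodic progress samples as well (e.g. 7860.06 IOPS -> 30.70 MiB/s), so the dip and recovery in MiB/s during the inaccessible-path window is purely the IOPS dip.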
22:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save 00:33:28.820
22:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:28.820
22:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore 00:33:28.820
22:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:28.820
22:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:28.820
22:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:28.820
22:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:28.820
22:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.355
22:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:31.355
00:33:31.355
real 0m43.231s 00:33:31.355
user 2m11.929s 00:33:31.355
sys 0m10.746s 00:33:31.355
22:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:31.355
22:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:31.355
************************************ 00:33:31.355
END TEST nvmf_host_multipath_status 00:33:31.355
************************************ 00:33:31.355
22:56:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:31.355
22:56:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:31.355
22:56:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:31.355
22:56:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.355
************************************ 00:33:31.355
START TEST nvmf_discovery_remove_ifc 00:33:31.355
************************************ 00:33:31.355
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:31.355
* Looking for test storage... 00:33:31.355
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:31.355
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:31.355
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:33:31.355
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:31.355
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:31.355
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:31.355
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:31.356
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:31.356
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:33:31.356
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:33:31.356
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:33:31.356
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:33:31.356
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:33:31.356
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:33:31.356
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:33:31.356
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:31.356
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:33:31.356
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:33:31.356
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:31.356
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:31.356
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:33:31.356
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:33:31.356
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:31.356
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:33:31.356
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:33:31.356
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:33:31.356
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:33:31.356
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:31.356
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:33:31.356
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:33:31.356
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:31.356
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:31.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.356 --rc genhtml_branch_coverage=1 00:33:31.356 --rc genhtml_function_coverage=1 00:33:31.356 --rc genhtml_legend=1 00:33:31.356 --rc geninfo_all_blocks=1 00:33:31.356 --rc geninfo_unexecuted_blocks=1 00:33:31.356 00:33:31.356 ' 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:31.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.356 --rc genhtml_branch_coverage=1 00:33:31.356 --rc genhtml_function_coverage=1 00:33:31.356 --rc genhtml_legend=1 00:33:31.356 --rc geninfo_all_blocks=1 00:33:31.356 --rc geninfo_unexecuted_blocks=1 00:33:31.356 00:33:31.356 ' 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:31.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.356 --rc genhtml_branch_coverage=1 00:33:31.356 --rc genhtml_function_coverage=1 00:33:31.356 --rc genhtml_legend=1 00:33:31.356 --rc geninfo_all_blocks=1 00:33:31.356 --rc geninfo_unexecuted_blocks=1 00:33:31.356 00:33:31.356 ' 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:31.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.356 --rc genhtml_branch_coverage=1 00:33:31.356 --rc genhtml_function_coverage=1 00:33:31.356 --rc genhtml_legend=1 
00:33:31.356 --rc geninfo_all_blocks=1 00:33:31.356 --rc geninfo_unexecuted_blocks=1 00:33:31.356 00:33:31.356 ' 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:31.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:31.356 
22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:31.356 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:31.357 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:31.357 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:31.357 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:31.357 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:31.357 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:31.357 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:31.357 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:31.357 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:31.357 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.357 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:31.357 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:33:31.357 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:33:31.357 22:56:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:33.888 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:33.888 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:33:33.888 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:33.888 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:33.888 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:33.888 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:33.888 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:33.888 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:33:33.888 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:33.888 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:33:33.888 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:33:33.888 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:33:33.888 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:33.889 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:33.889 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:33.889 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:33.889 22:56:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:33.889 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:33.889 22:56:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:33.889 22:56:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:33.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:33.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:33:33.889 00:33:33.889 --- 10.0.0.2 ping statistics --- 00:33:33.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:33.889 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:33.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:33.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:33:33.889 00:33:33.889 --- 10.0.0.1 ping statistics --- 00:33:33.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:33.889 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=373331 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 373331 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 373331 ']' 00:33:33.889 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:33.890 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:33.890 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:33.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:33.890 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:33.890 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:33.890 [2024-10-11 22:56:36.748952] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:33:33.890 [2024-10-11 22:56:36.749046] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:33.890 [2024-10-11 22:56:36.815997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:33.890 [2024-10-11 22:56:36.860109] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:33.890 [2024-10-11 22:56:36.860171] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:33.890 [2024-10-11 22:56:36.860200] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:33.890 [2024-10-11 22:56:36.860211] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:33.890 [2024-10-11 22:56:36.860220] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:33.890 [2024-10-11 22:56:36.860816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:33.890 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:33.890 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:33:33.890 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:33.890 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:33.890 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:33.890 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:33.890 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:33.890 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.890 22:56:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:33.890 [2024-10-11 22:56:37.001136] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:33.890 [2024-10-11 22:56:37.009300] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:33.890 null0 00:33:33.890 [2024-10-11 22:56:37.041263] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:33:33.890 22:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.890 22:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=373354 00:33:33.890 22:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:33.890 22:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 373354 /tmp/host.sock 00:33:33.890 22:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 373354 ']' 00:33:33.890 22:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:33:33.890 22:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:33.890 22:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:33.890 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:33.890 22:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:33.890 22:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:33.890 [2024-10-11 22:56:37.107512] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:33:33.890 [2024-10-11 22:56:37.107606] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid373354 ] 00:33:34.148 [2024-10-11 22:56:37.170220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.148 [2024-10-11 22:56:37.222347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.148 22:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:34.148 22:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:33:34.148 22:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:34.148 22:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:34.148 22:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.148 22:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:34.148 22:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.148 22:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:34.148 22:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.148 22:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:34.406 22:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.406 22:56:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:34.406 22:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.406 22:56:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:35.339 [2024-10-11 22:56:38.499655] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:35.339 [2024-10-11 22:56:38.499683] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:35.339 [2024-10-11 22:56:38.499706] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:35.339 [2024-10-11 22:56:38.587007] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:35.597 [2024-10-11 22:56:38.650225] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:35.597 [2024-10-11 22:56:38.650281] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:35.597 [2024-10-11 22:56:38.650315] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:35.597 [2024-10-11 22:56:38.650336] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:35.597 [2024-10-11 22:56:38.650357] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:35.597 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.597 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:35.597 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:35.597 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:35.597 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:35.597 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.597 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:35.597 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:35.597 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:35.597 [2024-10-11 22:56:38.657121] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x80e3d0 was disconnected and freed. delete nvme_qpair. 
00:33:35.597 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.597 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:35.597 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:35.597 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:35.597 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:35.597 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:35.597 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:35.597 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:35.597 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.597 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:35.597 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:35.597 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:35.597 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.597 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:35.597 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:36.531 22:56:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:36.531 22:56:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:36.531 22:56:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:36.531 22:56:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.531 22:56:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:36.531 22:56:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:36.531 22:56:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:36.789 22:56:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.789 22:56:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:36.789 22:56:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:37.723 22:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:37.723 22:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:37.723 22:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:37.723 22:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.723 22:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:37.723 22:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:37.723 22:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:33:37.723 22:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.723 22:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:37.723 22:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:38.677 22:56:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:38.677 22:56:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:38.677 22:56:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:38.677 22:56:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.677 22:56:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:38.677 22:56:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:38.677 22:56:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:38.677 22:56:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.677 22:56:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:38.677 22:56:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:39.663 22:56:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:39.663 22:56:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:39.663 22:56:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:39.663 22:56:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.663 22:56:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:39.663 22:56:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:39.663 22:56:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:39.950 22:56:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.950 22:56:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:39.950 22:56:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:40.968 22:56:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:40.968 22:56:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:40.968 22:56:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:40.968 22:56:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.968 22:56:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:40.968 22:56:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:40.968 22:56:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:40.968 22:56:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.968 22:56:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:40.968 22:56:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:33:40.968 [2024-10-11 22:56:44.091984] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:40.968 [2024-10-11 22:56:44.092049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.968 [2024-10-11 22:56:44.092070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.968 [2024-10-11 22:56:44.092087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.968 [2024-10-11 22:56:44.092100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.968 [2024-10-11 22:56:44.092112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.968 [2024-10-11 22:56:44.092124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.968 [2024-10-11 22:56:44.092136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.968 [2024-10-11 22:56:44.092148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.968 [2024-10-11 22:56:44.092161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.968 [2024-10-11 22:56:44.092173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.968 [2024-10-11 22:56:44.092185] 
nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eac80 is same with the state(6) to be set 00:33:40.968 [2024-10-11 22:56:44.102006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7eac80 (9): Bad file descriptor 00:33:40.968 [2024-10-11 22:56:44.112047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:41.901 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:41.901 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:41.902 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:41.902 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.902 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:41.902 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:41.902 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:42.159 [2024-10-11 22:56:45.177576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:42.159 [2024-10-11 22:56:45.177626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eac80 with addr=10.0.0.2, port=4420 00:33:42.159 [2024-10-11 22:56:45.177645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eac80 is same with the state(6) to be set 00:33:42.159 [2024-10-11 22:56:45.177671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7eac80 (9): Bad file descriptor 00:33:42.159 [2024-10-11 22:56:45.178040] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to 
perform failover, already in progress. 00:33:42.159 [2024-10-11 22:56:45.178076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:42.159 [2024-10-11 22:56:45.178092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:42.159 [2024-10-11 22:56:45.178105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:42.159 [2024-10-11 22:56:45.178125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:42.159 [2024-10-11 22:56:45.178140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:42.159 22:56:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.159 22:56:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:42.159 22:56:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:43.091 [2024-10-11 22:56:46.180627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:43.091 [2024-10-11 22:56:46.180653] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:43.091 [2024-10-11 22:56:46.180683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:43.091 [2024-10-11 22:56:46.180694] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:33:43.091 [2024-10-11 22:56:46.180714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:43.091 [2024-10-11 22:56:46.180744] bdev_nvme.c:6904:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:43.091 [2024-10-11 22:56:46.180789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.091 [2024-10-11 22:56:46.180809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.091 [2024-10-11 22:56:46.180825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.092 [2024-10-11 22:56:46.180838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.092 [2024-10-11 22:56:46.180851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.092 [2024-10-11 22:56:46.180863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.092 [2024-10-11 22:56:46.180899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.092 [2024-10-11 22:56:46.180912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.092 [2024-10-11 22:56:46.180925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.092 [2024-10-11 22:56:46.180936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.092 [2024-10-11 22:56:46.180948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:33:43.092 [2024-10-11 22:56:46.181198] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7da390 (9): Bad file descriptor 00:33:43.092 [2024-10-11 22:56:46.182212] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:43.092 [2024-10-11 22:56:46.182232] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:33:43.092 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:43.092 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:43.092 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:43.092 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.092 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.092 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:43.092 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:43.092 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.092 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:43.092 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:43.092 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:43.092 22:56:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:43.092 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:43.092 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:43.092 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.092 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:43.092 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.092 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:43.092 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:43.092 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.092 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:43.092 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:44.464 22:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:44.464 22:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:44.464 22:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:44.464 22:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.464 22:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:44.464 22:56:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:44.464 22:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:44.464 22:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.464 22:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:44.464 22:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:45.029 [2024-10-11 22:56:48.194476] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:45.029 [2024-10-11 22:56:48.194511] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:45.029 [2024-10-11 22:56:48.194548] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:45.029 [2024-10-11 22:56:48.280806] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:45.287 [2024-10-11 22:56:48.337503] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:45.287 [2024-10-11 22:56:48.337572] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:45.287 [2024-10-11 22:56:48.337604] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:45.287 [2024-10-11 22:56:48.337626] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:45.287 [2024-10-11 22:56:48.337638] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:45.287 [2024-10-11 22:56:48.343197] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x7e65a0 was disconnected and freed. delete nvme_qpair. 
00:33:45.287 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:45.287 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:45.287 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:45.287 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.287 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:45.287 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:45.287 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:45.287 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.287 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:45.287 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:45.287 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 373354 00:33:45.287 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 373354 ']' 00:33:45.287 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 373354 00:33:45.287 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:33:45.287 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:45.287 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 373354 00:33:45.287 
22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:45.287 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:45.287 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 373354' 00:33:45.287 killing process with pid 373354 00:33:45.287 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 373354 00:33:45.287 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 373354 00:33:45.545 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:45.545 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:45.545 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:33:45.545 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:45.545 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:33:45.545 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:45.545 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:45.545 rmmod nvme_tcp 00:33:45.545 rmmod nvme_fabrics 00:33:45.545 rmmod nvme_keyring 00:33:45.545 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:45.545 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:33:45.545 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:33:45.545 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 373331 ']' 00:33:45.545 22:56:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 373331 00:33:45.545 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 373331 ']' 00:33:45.545 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 373331 00:33:45.545 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:33:45.545 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:45.545 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 373331 00:33:45.545 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:45.545 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:45.545 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 373331' 00:33:45.545 killing process with pid 373331 00:33:45.545 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 373331 00:33:45.545 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 373331 00:33:45.804 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:45.804 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:45.804 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:45.804 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:33:45.804 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:33:45.804 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:45.804 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:33:45.804 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:45.804 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:45.804 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:45.804 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:45.804 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:47.709 22:56:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:47.709 00:33:47.709 real 0m16.802s 00:33:47.709 user 0m23.551s 00:33:47.709 sys 0m3.084s 00:33:47.709 22:56:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:47.709 22:56:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:47.709 ************************************ 00:33:47.709 END TEST nvmf_discovery_remove_ifc 00:33:47.709 ************************************ 00:33:47.967 22:56:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:47.967 22:56:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:47.967 22:56:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:47.967 22:56:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.967 ************************************ 00:33:47.967 START TEST nvmf_identify_kernel_target 
00:33:47.967 ************************************ 00:33:47.967 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:47.967 * Looking for test storage... 00:33:47.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:47.967 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:47.967 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:33:47.967 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:47.967 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:47.967 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:47.967 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:47.967 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:47.967 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:47.967 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:47.967 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:47.967 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:47.967 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:47.967 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:47.967 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@341 -- # ver2_l=1 00:33:47.967 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:47.967 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:33:47.967 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:33:47.967 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:47.967 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:47.967 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:33:47.967 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:33:47.967 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:47.968 
22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:47.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.968 --rc genhtml_branch_coverage=1 00:33:47.968 --rc genhtml_function_coverage=1 00:33:47.968 --rc genhtml_legend=1 00:33:47.968 --rc geninfo_all_blocks=1 00:33:47.968 --rc geninfo_unexecuted_blocks=1 00:33:47.968 00:33:47.968 ' 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:47.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.968 --rc genhtml_branch_coverage=1 00:33:47.968 --rc genhtml_function_coverage=1 00:33:47.968 --rc genhtml_legend=1 00:33:47.968 --rc geninfo_all_blocks=1 00:33:47.968 --rc geninfo_unexecuted_blocks=1 00:33:47.968 00:33:47.968 ' 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:47.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.968 --rc genhtml_branch_coverage=1 00:33:47.968 --rc genhtml_function_coverage=1 00:33:47.968 --rc genhtml_legend=1 00:33:47.968 --rc geninfo_all_blocks=1 00:33:47.968 --rc geninfo_unexecuted_blocks=1 00:33:47.968 00:33:47.968 ' 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:47.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.968 --rc genhtml_branch_coverage=1 00:33:47.968 --rc genhtml_function_coverage=1 00:33:47.968 --rc genhtml_legend=1 00:33:47.968 --rc geninfo_all_blocks=1 00:33:47.968 --rc geninfo_unexecuted_blocks=1 00:33:47.968 
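The `lt 1.15 2` trace from `scripts/common.sh` above shows a field-wise version comparison: both versions are split on `.`, `-` and `:` into arrays, the loop runs over the longer array, and missing fields count as zero. A self-contained sketch of that logic is below; `ver_lt` is an illustrative name, and the sketch assumes plain decimal fields (leading zeros would trip bash's octal arithmetic parsing).

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions/lt logic traced above (lt 1.15 2 -> true):
# split on '.', '-' and ':', then compare numerically field by field.
ver_lt() {
    local IFS=.-:                    # same separator set as the trace's IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local n=${#ver1[@]} i a b
    if (( ${#ver2[@]} > n )); then n=${#ver2[@]}; fi
    for (( i = 0; i < n; i++ )); do
        a=${ver1[i]:-0}              # absent fields default to 0
        b=${ver2[i]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1                         # equal versions are not strictly less
}
```

This is how the harness decides, for example, that the detected `lcov` 1.15 predates version 2 and needs the legacy `--rc lcov_branch_coverage=1` option spelling.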
00:33:47.968 ' 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:47.968 22:56:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:47.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:47.968 22:56:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:50.500 22:56:53 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:50.500 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:50.500 22:56:53 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:50.500 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:50.500 22:56:53 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:50.500 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:50.500 Found net devices under 0000:0a:00.1: cvl_0_1 
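The `Found net devices under 0000:0a:00.0: cvl_0_0` lines above come from globbing each PCI device's `net/` directory in sysfs and stripping the path prefix. Here is a small sketch of that scan; the `SYSFS_ROOT` variable is a hypothetical knob added so the logic can be exercised against a fake tree, whereas the traced script hardcodes `/sys/bus/pci/devices`.

```shell
#!/usr/bin/env bash
# Sketch of the per-PCI net-device scan traced above: glob the kernel's
# net/ subdirectory for a PCI address and report the interface names.
SYSFS_ROOT=${SYSFS_ROOT:-/sys/bus/pci/devices}
find_pci_net_devs() {
    local pci=$1 dev
    local -a pci_net_devs=("$SYSFS_ROOT/$pci/net/"*)
    [ -e "${pci_net_devs[0]}" ] || return 1   # glob matched nothing
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface name
    for dev in "${pci_net_devs[@]}"; do
        echo "Found net devices under $pci: $dev"
    done
}
```

The `"${pci_net_devs[@]##*/}"` expansion is the same prefix-stripping step the trace shows at `nvmf/common.sh@425`.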
00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:50.500 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:50.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:50.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:33:50.501 00:33:50.501 --- 10.0.0.2 ping statistics --- 00:33:50.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:50.501 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:50.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:50.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:33:50.501 00:33:50.501 --- 10.0.0.1 ping statistics --- 00:33:50.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:50.501 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:50.501 
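The `nvmf_tcp_init` trace above splits the two ports of the NIC into a target side (moved into the `cvl_0_0_ns_spdk` namespace, 10.0.0.2) and an initiator side (left in the root namespace, 10.0.0.1), opens the NVMe/TCP port with an iptables rule tagged `SPDK_NVMF` so teardown can grep it back out, and verifies connectivity with `ping -c 1` in both directions. A dry-run sketch of that sequence follows; the function name and the `RUN=echo` wrapper are illustrative additions so the command plan can be inspected without root, not part of the traced script.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns/iptables setup traced above.
# RUN=echo prints the plan; dropping it would execute for real (needs root).
RUN=${RUN:-echo}
setup_test_net() {
    local tgt_if=$1 ini_if=$2 ns=$3 tgt_ip=$4 ini_ip=$5 port=$6
    $RUN ip netns add "$ns"                              # private target namespace
    $RUN ip link set "$tgt_if" netns "$ns"               # move target port into it
    $RUN ip addr add "$ini_ip/24" dev "$ini_if"
    $RUN ip netns exec "$ns" ip addr add "$tgt_ip/24" dev "$tgt_if"
    $RUN ip link set "$ini_if" up
    $RUN ip netns exec "$ns" ip link set "$tgt_if" up
    # Tagged ACCEPT rule, as in the trace's ipts helper, so teardown can
    # filter it out of iptables-save by the SPDK_NVMF comment.
    $RUN iptables -I INPUT 1 -i "$ini_if" -p tcp --dport "$port" \
        -j ACCEPT -m comment --comment SPDK_NVMF
    $RUN ping -c 1 "$tgt_ip"                             # connectivity check
}
```

The teardown half visible earlier in the log (`iptables-save | grep -v SPDK_NVMF | iptables-restore`) is the mirror image: restoring the ruleset minus every rule carrying that comment.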
22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:50.501 22:56:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:51.436 Waiting for block devices as requested 00:33:51.436 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:51.696 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:51.696 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:51.696 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:51.955 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:51.955 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:51.955 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:51.955 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:52.213 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:52.213 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:52.213 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:52.213 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:52.472 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:52.472 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:52.472 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 
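The `configure_kernel_target` trace above builds its paths (`kernel_subsystem`, `kernel_namespace`, `kernel_port`) under `/sys/kernel/config/nvmet`, where the kernel NVMe-oF target is assembled by creating directories and writing attribute files. The sketch below only prints that plan rather than applying it (the real steps need root and `modprobe nvmet`, as the trace runs); the function name is illustrative, while the attribute file names are the standard nvmet configfs ones and port 4420 matches `NVMF_PORT` in the log.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the configfs layout behind configure_kernel_target:
# print the directory/attribute writes that expose a subsystem over TCP.
kernel_target_plan() {
    local nqn=$1 ip=$2
    local nvmet=/sys/kernel/config/nvmet
    local subsys=$nvmet/subsystems/$nqn      # kernel_subsystem in the trace
    local port=$nvmet/ports/1                # kernel_port in the trace
    echo "mkdir -p $subsys/namespaces/1 $port"
    echo "echo 1 > $subsys/attr_allow_any_host"
    echo "echo $ip > $port/addr_traddr"
    echo "echo tcp > $port/addr_trtype"
    echo "echo 4420 > $port/addr_trsvcid"
    echo "ln -s $subsys $port/subsystems/$nqn"   # link subsystem to the port
}
```

The block-device loop that follows in the log (`is_block_zoned`, `block_in_use`, the GPT probe that prints "No valid GPT data, bailing") is how the script picks an unused, non-zoned `/dev/nvme*` device to back the namespace created here.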
00:33:52.732 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:52.732 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:52.732 22:56:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:33:52.732 22:56:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:52.732 22:56:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:33:52.732 22:56:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:33:52.732 22:56:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:52.732 22:56:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:33:52.732 22:56:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:33:52.732 22:56:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:52.732 22:56:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:52.732 No valid GPT data, bailing 00:33:52.732 22:56:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:52.991 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:33:52.991 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:33:52.991 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:33:52.991 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:33:52.991 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:52.991 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:52.991 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:52.991 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:52.991 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:33:52.991 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:33:52.991 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:33:52.991 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:33:52.991 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:33:52.991 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:33:52.991 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:33:52.991 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:52.991 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:33:52.991 00:33:52.991 Discovery Log Number of Records 2, Generation counter 2 00:33:52.991 =====Discovery Log Entry 0====== 00:33:52.991 trtype: tcp 00:33:52.991 adrfam: ipv4 00:33:52.991 subtype: current discovery subsystem 
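The configure_kernel_target steps traced above (mkdir under /sys/kernel/config/nvmet, echo of the model string, namespace device path and enable flag, port address attributes, and the ln -s that attaches the subsystem to the port) can be collected into one standalone sketch. Paths and values mirror the log; the attribute file names (attr_model, attr_allow_any_host, device_path, addr_*) are the standard kernel nvmet configfs layout, and the privileged steps are guarded because they need root plus the nvmet/nvmet_tcp modules loaded:

```shell
# Sketch of the log's configure_kernel_target sequence, not a verbatim
# copy of nvmf/common.sh. Needs root and "modprobe nvmet nvmet_tcp".
nqn="nqn.2016-06.io.spdk:testnqn"
nvmet="/sys/kernel/config/nvmet"
subsys="$nvmet/subsystems/$nqn"
ns="$subsys/namespaces/1"
port="$nvmet/ports/1"

if [ -d "$nvmet" ] && [ -w "$nvmet" ]; then
    mkdir "$subsys" "$ns" "$port"
    echo "SPDK-$nqn"  > "$subsys/attr_model"            # model string seen in identify
    echo 1            > "$subsys/attr_allow_any_host"   # skip host NQN allow-list
    echo /dev/nvme0n1 > "$ns/device_path"               # backing block device
    echo 1            > "$ns/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"                 # expose subsystem on the port
else
    echo "nvmet configfs not available; would create $subsys"
fi
```

After the symlink lands, the target answers discovery on 10.0.0.1:4420, which is exactly what the `nvme discover` in the log verifies.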
00:33:52.991 treq: not specified, sq flow control disable supported 00:33:52.991 portid: 1 00:33:52.991 trsvcid: 4420 00:33:52.991 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:52.991 traddr: 10.0.0.1 00:33:52.991 eflags: none 00:33:52.991 sectype: none 00:33:52.991 =====Discovery Log Entry 1====== 00:33:52.991 trtype: tcp 00:33:52.991 adrfam: ipv4 00:33:52.991 subtype: nvme subsystem 00:33:52.991 treq: not specified, sq flow control disable supported 00:33:52.991 portid: 1 00:33:52.991 trsvcid: 4420 00:33:52.991 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:52.991 traddr: 10.0.0.1 00:33:52.991 eflags: none 00:33:52.991 sectype: none 00:33:52.991 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:33:52.991 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:52.991 ===================================================== 00:33:52.991 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:52.991 ===================================================== 00:33:52.991 Controller Capabilities/Features 00:33:52.991 ================================ 00:33:52.991 Vendor ID: 0000 00:33:52.991 Subsystem Vendor ID: 0000 00:33:52.991 Serial Number: 2fe0ad68c13dbc8aec38 00:33:52.991 Model Number: Linux 00:33:52.991 Firmware Version: 6.8.9-20 00:33:52.991 Recommended Arb Burst: 0 00:33:52.991 IEEE OUI Identifier: 00 00 00 00:33:52.991 Multi-path I/O 00:33:52.991 May have multiple subsystem ports: No 00:33:52.991 May have multiple controllers: No 00:33:52.991 Associated with SR-IOV VF: No 00:33:52.991 Max Data Transfer Size: Unlimited 00:33:52.991 Max Number of Namespaces: 0 00:33:52.991 Max Number of I/O Queues: 1024 00:33:52.991 NVMe Specification Version (VS): 1.3 00:33:52.991 NVMe Specification Version (Identify): 1.3 00:33:52.991 Maximum Queue Entries: 1024 
00:33:52.991 Contiguous Queues Required: No 00:33:52.991 Arbitration Mechanisms Supported 00:33:52.991 Weighted Round Robin: Not Supported 00:33:52.991 Vendor Specific: Not Supported 00:33:52.991 Reset Timeout: 7500 ms 00:33:52.991 Doorbell Stride: 4 bytes 00:33:52.991 NVM Subsystem Reset: Not Supported 00:33:52.991 Command Sets Supported 00:33:52.991 NVM Command Set: Supported 00:33:52.991 Boot Partition: Not Supported 00:33:52.991 Memory Page Size Minimum: 4096 bytes 00:33:52.991 Memory Page Size Maximum: 4096 bytes 00:33:52.991 Persistent Memory Region: Not Supported 00:33:52.991 Optional Asynchronous Events Supported 00:33:52.991 Namespace Attribute Notices: Not Supported 00:33:52.991 Firmware Activation Notices: Not Supported 00:33:52.991 ANA Change Notices: Not Supported 00:33:52.991 PLE Aggregate Log Change Notices: Not Supported 00:33:52.991 LBA Status Info Alert Notices: Not Supported 00:33:52.991 EGE Aggregate Log Change Notices: Not Supported 00:33:52.991 Normal NVM Subsystem Shutdown event: Not Supported 00:33:52.991 Zone Descriptor Change Notices: Not Supported 00:33:52.991 Discovery Log Change Notices: Supported 00:33:52.991 Controller Attributes 00:33:52.991 128-bit Host Identifier: Not Supported 00:33:52.991 Non-Operational Permissive Mode: Not Supported 00:33:52.991 NVM Sets: Not Supported 00:33:52.991 Read Recovery Levels: Not Supported 00:33:52.991 Endurance Groups: Not Supported 00:33:52.991 Predictable Latency Mode: Not Supported 00:33:52.991 Traffic Based Keep ALive: Not Supported 00:33:52.991 Namespace Granularity: Not Supported 00:33:52.991 SQ Associations: Not Supported 00:33:52.991 UUID List: Not Supported 00:33:52.991 Multi-Domain Subsystem: Not Supported 00:33:52.991 Fixed Capacity Management: Not Supported 00:33:52.991 Variable Capacity Management: Not Supported 00:33:52.991 Delete Endurance Group: Not Supported 00:33:52.991 Delete NVM Set: Not Supported 00:33:52.991 Extended LBA Formats Supported: Not Supported 00:33:52.991 Flexible 
Data Placement Supported: Not Supported 00:33:52.991 00:33:52.991 Controller Memory Buffer Support 00:33:52.991 ================================ 00:33:52.991 Supported: No 00:33:52.991 00:33:52.991 Persistent Memory Region Support 00:33:52.991 ================================ 00:33:52.991 Supported: No 00:33:52.991 00:33:52.991 Admin Command Set Attributes 00:33:52.991 ============================ 00:33:52.991 Security Send/Receive: Not Supported 00:33:52.991 Format NVM: Not Supported 00:33:52.991 Firmware Activate/Download: Not Supported 00:33:52.992 Namespace Management: Not Supported 00:33:52.992 Device Self-Test: Not Supported 00:33:52.992 Directives: Not Supported 00:33:52.992 NVMe-MI: Not Supported 00:33:52.992 Virtualization Management: Not Supported 00:33:52.992 Doorbell Buffer Config: Not Supported 00:33:52.992 Get LBA Status Capability: Not Supported 00:33:52.992 Command & Feature Lockdown Capability: Not Supported 00:33:52.992 Abort Command Limit: 1 00:33:52.992 Async Event Request Limit: 1 00:33:52.992 Number of Firmware Slots: N/A 00:33:52.992 Firmware Slot 1 Read-Only: N/A 00:33:52.992 Firmware Activation Without Reset: N/A 00:33:52.992 Multiple Update Detection Support: N/A 00:33:52.992 Firmware Update Granularity: No Information Provided 00:33:52.992 Per-Namespace SMART Log: No 00:33:52.992 Asymmetric Namespace Access Log Page: Not Supported 00:33:52.992 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:52.992 Command Effects Log Page: Not Supported 00:33:52.992 Get Log Page Extended Data: Supported 00:33:52.992 Telemetry Log Pages: Not Supported 00:33:52.992 Persistent Event Log Pages: Not Supported 00:33:52.992 Supported Log Pages Log Page: May Support 00:33:52.992 Commands Supported & Effects Log Page: Not Supported 00:33:52.992 Feature Identifiers & Effects Log Page:May Support 00:33:52.992 NVMe-MI Commands & Effects Log Page: May Support 00:33:52.992 Data Area 4 for Telemetry Log: Not Supported 00:33:52.992 Error Log Page Entries 
Supported: 1 00:33:52.992 Keep Alive: Not Supported 00:33:52.992 00:33:52.992 NVM Command Set Attributes 00:33:52.992 ========================== 00:33:52.992 Submission Queue Entry Size 00:33:52.992 Max: 1 00:33:52.992 Min: 1 00:33:52.992 Completion Queue Entry Size 00:33:52.992 Max: 1 00:33:52.992 Min: 1 00:33:52.992 Number of Namespaces: 0 00:33:52.992 Compare Command: Not Supported 00:33:52.992 Write Uncorrectable Command: Not Supported 00:33:52.992 Dataset Management Command: Not Supported 00:33:52.992 Write Zeroes Command: Not Supported 00:33:52.992 Set Features Save Field: Not Supported 00:33:52.992 Reservations: Not Supported 00:33:52.992 Timestamp: Not Supported 00:33:52.992 Copy: Not Supported 00:33:52.992 Volatile Write Cache: Not Present 00:33:52.992 Atomic Write Unit (Normal): 1 00:33:52.992 Atomic Write Unit (PFail): 1 00:33:52.992 Atomic Compare & Write Unit: 1 00:33:52.992 Fused Compare & Write: Not Supported 00:33:52.992 Scatter-Gather List 00:33:52.992 SGL Command Set: Supported 00:33:52.992 SGL Keyed: Not Supported 00:33:52.992 SGL Bit Bucket Descriptor: Not Supported 00:33:52.992 SGL Metadata Pointer: Not Supported 00:33:52.992 Oversized SGL: Not Supported 00:33:52.992 SGL Metadata Address: Not Supported 00:33:52.992 SGL Offset: Supported 00:33:52.992 Transport SGL Data Block: Not Supported 00:33:52.992 Replay Protected Memory Block: Not Supported 00:33:52.992 00:33:52.992 Firmware Slot Information 00:33:52.992 ========================= 00:33:52.992 Active slot: 0 00:33:52.992 00:33:52.992 00:33:52.992 Error Log 00:33:52.992 ========= 00:33:52.992 00:33:52.992 Active Namespaces 00:33:52.992 ================= 00:33:52.992 Discovery Log Page 00:33:52.992 ================== 00:33:52.992 Generation Counter: 2 00:33:52.992 Number of Records: 2 00:33:52.992 Record Format: 0 00:33:52.992 00:33:52.992 Discovery Log Entry 0 00:33:52.992 ---------------------- 00:33:52.992 Transport Type: 3 (TCP) 00:33:52.992 Address Family: 1 (IPv4) 00:33:52.992 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:33:52.992 Entry Flags: 00:33:52.992 Duplicate Returned Information: 0 00:33:52.992 Explicit Persistent Connection Support for Discovery: 0 00:33:52.992 Transport Requirements: 00:33:52.992 Secure Channel: Not Specified 00:33:52.992 Port ID: 1 (0x0001) 00:33:52.992 Controller ID: 65535 (0xffff) 00:33:52.992 Admin Max SQ Size: 32 00:33:52.992 Transport Service Identifier: 4420 00:33:52.992 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:52.992 Transport Address: 10.0.0.1 00:33:52.992 Discovery Log Entry 1 00:33:52.992 ---------------------- 00:33:52.992 Transport Type: 3 (TCP) 00:33:52.992 Address Family: 1 (IPv4) 00:33:52.992 Subsystem Type: 2 (NVM Subsystem) 00:33:52.992 Entry Flags: 00:33:52.992 Duplicate Returned Information: 0 00:33:52.992 Explicit Persistent Connection Support for Discovery: 0 00:33:52.992 Transport Requirements: 00:33:52.992 Secure Channel: Not Specified 00:33:52.992 Port ID: 1 (0x0001) 00:33:52.992 Controller ID: 65535 (0xffff) 00:33:52.992 Admin Max SQ Size: 32 00:33:52.992 Transport Service Identifier: 4420 00:33:52.992 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:52.992 Transport Address: 10.0.0.1 00:33:52.992 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:53.250 get_feature(0x01) failed 00:33:53.250 get_feature(0x02) failed 00:33:53.250 get_feature(0x04) failed 00:33:53.250 ===================================================== 00:33:53.250 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:53.250 ===================================================== 00:33:53.250 Controller Capabilities/Features 00:33:53.250 ================================ 00:33:53.250 Vendor ID: 0000 00:33:53.250 Subsystem Vendor ID: 
0000 00:33:53.250 Serial Number: a031b8e117771958fa16 00:33:53.250 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:53.250 Firmware Version: 6.8.9-20 00:33:53.250 Recommended Arb Burst: 6 00:33:53.250 IEEE OUI Identifier: 00 00 00 00:33:53.250 Multi-path I/O 00:33:53.250 May have multiple subsystem ports: Yes 00:33:53.250 May have multiple controllers: Yes 00:33:53.250 Associated with SR-IOV VF: No 00:33:53.250 Max Data Transfer Size: Unlimited 00:33:53.250 Max Number of Namespaces: 1024 00:33:53.250 Max Number of I/O Queues: 128 00:33:53.250 NVMe Specification Version (VS): 1.3 00:33:53.250 NVMe Specification Version (Identify): 1.3 00:33:53.250 Maximum Queue Entries: 1024 00:33:53.250 Contiguous Queues Required: No 00:33:53.251 Arbitration Mechanisms Supported 00:33:53.251 Weighted Round Robin: Not Supported 00:33:53.251 Vendor Specific: Not Supported 00:33:53.251 Reset Timeout: 7500 ms 00:33:53.251 Doorbell Stride: 4 bytes 00:33:53.251 NVM Subsystem Reset: Not Supported 00:33:53.251 Command Sets Supported 00:33:53.251 NVM Command Set: Supported 00:33:53.251 Boot Partition: Not Supported 00:33:53.251 Memory Page Size Minimum: 4096 bytes 00:33:53.251 Memory Page Size Maximum: 4096 bytes 00:33:53.251 Persistent Memory Region: Not Supported 00:33:53.251 Optional Asynchronous Events Supported 00:33:53.251 Namespace Attribute Notices: Supported 00:33:53.251 Firmware Activation Notices: Not Supported 00:33:53.251 ANA Change Notices: Supported 00:33:53.251 PLE Aggregate Log Change Notices: Not Supported 00:33:53.251 LBA Status Info Alert Notices: Not Supported 00:33:53.251 EGE Aggregate Log Change Notices: Not Supported 00:33:53.251 Normal NVM Subsystem Shutdown event: Not Supported 00:33:53.251 Zone Descriptor Change Notices: Not Supported 00:33:53.251 Discovery Log Change Notices: Not Supported 00:33:53.251 Controller Attributes 00:33:53.251 128-bit Host Identifier: Supported 00:33:53.251 Non-Operational Permissive Mode: Not Supported 00:33:53.251 NVM Sets: Not 
Supported 00:33:53.251 Read Recovery Levels: Not Supported 00:33:53.251 Endurance Groups: Not Supported 00:33:53.251 Predictable Latency Mode: Not Supported 00:33:53.251 Traffic Based Keep ALive: Supported 00:33:53.251 Namespace Granularity: Not Supported 00:33:53.251 SQ Associations: Not Supported 00:33:53.251 UUID List: Not Supported 00:33:53.251 Multi-Domain Subsystem: Not Supported 00:33:53.251 Fixed Capacity Management: Not Supported 00:33:53.251 Variable Capacity Management: Not Supported 00:33:53.251 Delete Endurance Group: Not Supported 00:33:53.251 Delete NVM Set: Not Supported 00:33:53.251 Extended LBA Formats Supported: Not Supported 00:33:53.251 Flexible Data Placement Supported: Not Supported 00:33:53.251 00:33:53.251 Controller Memory Buffer Support 00:33:53.251 ================================ 00:33:53.251 Supported: No 00:33:53.251 00:33:53.251 Persistent Memory Region Support 00:33:53.251 ================================ 00:33:53.251 Supported: No 00:33:53.251 00:33:53.251 Admin Command Set Attributes 00:33:53.251 ============================ 00:33:53.251 Security Send/Receive: Not Supported 00:33:53.251 Format NVM: Not Supported 00:33:53.251 Firmware Activate/Download: Not Supported 00:33:53.251 Namespace Management: Not Supported 00:33:53.251 Device Self-Test: Not Supported 00:33:53.251 Directives: Not Supported 00:33:53.251 NVMe-MI: Not Supported 00:33:53.251 Virtualization Management: Not Supported 00:33:53.251 Doorbell Buffer Config: Not Supported 00:33:53.251 Get LBA Status Capability: Not Supported 00:33:53.251 Command & Feature Lockdown Capability: Not Supported 00:33:53.251 Abort Command Limit: 4 00:33:53.251 Async Event Request Limit: 4 00:33:53.251 Number of Firmware Slots: N/A 00:33:53.251 Firmware Slot 1 Read-Only: N/A 00:33:53.251 Firmware Activation Without Reset: N/A 00:33:53.251 Multiple Update Detection Support: N/A 00:33:53.251 Firmware Update Granularity: No Information Provided 00:33:53.251 Per-Namespace SMART Log: Yes 
00:33:53.251 Asymmetric Namespace Access Log Page: Supported 00:33:53.251 ANA Transition Time : 10 sec 00:33:53.251 00:33:53.251 Asymmetric Namespace Access Capabilities 00:33:53.251 ANA Optimized State : Supported 00:33:53.251 ANA Non-Optimized State : Supported 00:33:53.251 ANA Inaccessible State : Supported 00:33:53.251 ANA Persistent Loss State : Supported 00:33:53.251 ANA Change State : Supported 00:33:53.251 ANAGRPID is not changed : No 00:33:53.251 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:53.251 00:33:53.251 ANA Group Identifier Maximum : 128 00:33:53.251 Number of ANA Group Identifiers : 128 00:33:53.251 Max Number of Allowed Namespaces : 1024 00:33:53.251 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:53.251 Command Effects Log Page: Supported 00:33:53.251 Get Log Page Extended Data: Supported 00:33:53.251 Telemetry Log Pages: Not Supported 00:33:53.251 Persistent Event Log Pages: Not Supported 00:33:53.251 Supported Log Pages Log Page: May Support 00:33:53.251 Commands Supported & Effects Log Page: Not Supported 00:33:53.251 Feature Identifiers & Effects Log Page:May Support 00:33:53.251 NVMe-MI Commands & Effects Log Page: May Support 00:33:53.251 Data Area 4 for Telemetry Log: Not Supported 00:33:53.251 Error Log Page Entries Supported: 128 00:33:53.251 Keep Alive: Supported 00:33:53.251 Keep Alive Granularity: 1000 ms 00:33:53.251 00:33:53.251 NVM Command Set Attributes 00:33:53.251 ========================== 00:33:53.251 Submission Queue Entry Size 00:33:53.251 Max: 64 00:33:53.251 Min: 64 00:33:53.251 Completion Queue Entry Size 00:33:53.251 Max: 16 00:33:53.251 Min: 16 00:33:53.251 Number of Namespaces: 1024 00:33:53.251 Compare Command: Not Supported 00:33:53.251 Write Uncorrectable Command: Not Supported 00:33:53.251 Dataset Management Command: Supported 00:33:53.251 Write Zeroes Command: Supported 00:33:53.251 Set Features Save Field: Not Supported 00:33:53.251 Reservations: Not Supported 00:33:53.251 Timestamp: Not Supported 
00:33:53.251 Copy: Not Supported 00:33:53.251 Volatile Write Cache: Present 00:33:53.251 Atomic Write Unit (Normal): 1 00:33:53.251 Atomic Write Unit (PFail): 1 00:33:53.251 Atomic Compare & Write Unit: 1 00:33:53.251 Fused Compare & Write: Not Supported 00:33:53.251 Scatter-Gather List 00:33:53.251 SGL Command Set: Supported 00:33:53.251 SGL Keyed: Not Supported 00:33:53.251 SGL Bit Bucket Descriptor: Not Supported 00:33:53.251 SGL Metadata Pointer: Not Supported 00:33:53.251 Oversized SGL: Not Supported 00:33:53.251 SGL Metadata Address: Not Supported 00:33:53.251 SGL Offset: Supported 00:33:53.251 Transport SGL Data Block: Not Supported 00:33:53.251 Replay Protected Memory Block: Not Supported 00:33:53.251 00:33:53.251 Firmware Slot Information 00:33:53.251 ========================= 00:33:53.251 Active slot: 0 00:33:53.251 00:33:53.251 Asymmetric Namespace Access 00:33:53.251 =========================== 00:33:53.251 Change Count : 0 00:33:53.251 Number of ANA Group Descriptors : 1 00:33:53.251 ANA Group Descriptor : 0 00:33:53.251 ANA Group ID : 1 00:33:53.251 Number of NSID Values : 1 00:33:53.251 Change Count : 0 00:33:53.251 ANA State : 1 00:33:53.251 Namespace Identifier : 1 00:33:53.251 00:33:53.251 Commands Supported and Effects 00:33:53.251 ============================== 00:33:53.251 Admin Commands 00:33:53.251 -------------- 00:33:53.251 Get Log Page (02h): Supported 00:33:53.251 Identify (06h): Supported 00:33:53.251 Abort (08h): Supported 00:33:53.251 Set Features (09h): Supported 00:33:53.251 Get Features (0Ah): Supported 00:33:53.251 Asynchronous Event Request (0Ch): Supported 00:33:53.251 Keep Alive (18h): Supported 00:33:53.251 I/O Commands 00:33:53.251 ------------ 00:33:53.251 Flush (00h): Supported 00:33:53.251 Write (01h): Supported LBA-Change 00:33:53.251 Read (02h): Supported 00:33:53.251 Write Zeroes (08h): Supported LBA-Change 00:33:53.251 Dataset Management (09h): Supported 00:33:53.251 00:33:53.251 Error Log 00:33:53.251 ========= 
00:33:53.251 Entry: 0 00:33:53.251 Error Count: 0x3 00:33:53.251 Submission Queue Id: 0x0 00:33:53.251 Command Id: 0x5 00:33:53.251 Phase Bit: 0 00:33:53.251 Status Code: 0x2 00:33:53.251 Status Code Type: 0x0 00:33:53.251 Do Not Retry: 1 00:33:53.251 Error Location: 0x28 00:33:53.251 LBA: 0x0 00:33:53.251 Namespace: 0x0 00:33:53.251 Vendor Log Page: 0x0 00:33:53.251 ----------- 00:33:53.251 Entry: 1 00:33:53.251 Error Count: 0x2 00:33:53.251 Submission Queue Id: 0x0 00:33:53.251 Command Id: 0x5 00:33:53.251 Phase Bit: 0 00:33:53.251 Status Code: 0x2 00:33:53.251 Status Code Type: 0x0 00:33:53.251 Do Not Retry: 1 00:33:53.251 Error Location: 0x28 00:33:53.251 LBA: 0x0 00:33:53.251 Namespace: 0x0 00:33:53.251 Vendor Log Page: 0x0 00:33:53.251 ----------- 00:33:53.251 Entry: 2 00:33:53.251 Error Count: 0x1 00:33:53.251 Submission Queue Id: 0x0 00:33:53.251 Command Id: 0x4 00:33:53.251 Phase Bit: 0 00:33:53.251 Status Code: 0x2 00:33:53.251 Status Code Type: 0x0 00:33:53.251 Do Not Retry: 1 00:33:53.252 Error Location: 0x28 00:33:53.252 LBA: 0x0 00:33:53.252 Namespace: 0x0 00:33:53.252 Vendor Log Page: 0x0 00:33:53.252 00:33:53.252 Number of Queues 00:33:53.252 ================ 00:33:53.252 Number of I/O Submission Queues: 128 00:33:53.252 Number of I/O Completion Queues: 128 00:33:53.252 00:33:53.252 ZNS Specific Controller Data 00:33:53.252 ============================ 00:33:53.252 Zone Append Size Limit: 0 00:33:53.252 00:33:53.252 00:33:53.252 Active Namespaces 00:33:53.252 ================= 00:33:53.252 get_feature(0x05) failed 00:33:53.252 Namespace ID:1 00:33:53.252 Command Set Identifier: NVM (00h) 00:33:53.252 Deallocate: Supported 00:33:53.252 Deallocated/Unwritten Error: Not Supported 00:33:53.252 Deallocated Read Value: Unknown 00:33:53.252 Deallocate in Write Zeroes: Not Supported 00:33:53.252 Deallocated Guard Field: 0xFFFF 00:33:53.252 Flush: Supported 00:33:53.252 Reservation: Not Supported 00:33:53.252 Namespace Sharing Capabilities: Multiple 
Controllers 00:33:53.252 Size (in LBAs): 1953525168 (931GiB) 00:33:53.252 Capacity (in LBAs): 1953525168 (931GiB) 00:33:53.252 Utilization (in LBAs): 1953525168 (931GiB) 00:33:53.252 UUID: eb205d37-d8b4-4c98-9b28-ef887c77e2d6 00:33:53.252 Thin Provisioning: Not Supported 00:33:53.252 Per-NS Atomic Units: Yes 00:33:53.252 Atomic Boundary Size (Normal): 0 00:33:53.252 Atomic Boundary Size (PFail): 0 00:33:53.252 Atomic Boundary Offset: 0 00:33:53.252 NGUID/EUI64 Never Reused: No 00:33:53.252 ANA group ID: 1 00:33:53.252 Namespace Write Protected: No 00:33:53.252 Number of LBA Formats: 1 00:33:53.252 Current LBA Format: LBA Format #00 00:33:53.252 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:53.252 00:33:53.252 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:53.252 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:53.252 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:33:53.252 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:53.252 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:33:53.252 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:53.252 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:53.252 rmmod nvme_tcp 00:33:53.252 rmmod nvme_fabrics 00:33:53.252 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:53.252 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:33:53.252 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:33:53.252 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 
00:33:53.252 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:53.252 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:53.252 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:53.252 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:33:53.252 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:33:53.252 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:53.252 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:33:53.252 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:53.252 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:53.252 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:53.252 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:53.252 22:56:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:55.154 22:56:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:55.154 22:56:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:33:55.154 22:56:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:55.154 22:56:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:33:55.154 22:56:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:55.154 22:56:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:55.154 22:56:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:55.154 22:56:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:55.154 22:56:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:33:55.154 22:56:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:33:55.412 22:56:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:56.786 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:56.786 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:56.786 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:56.786 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:56.786 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:56.786 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:56.786 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:56.786 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:56.786 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:56.786 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:56.786 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:56.786 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:56.786 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:56.786 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:56.786 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:56.786 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
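The clean_kernel_target teardown traced here reverses the setup in the opposite order: disable the namespace, remove the port-to-subsystem symlink, rmdir the configfs directories, then unload the modules. A minimal sketch with the same paths as the log (guarded, since it is destructive and needs root):

```shell
# Teardown mirroring the log's clean_kernel_target; order matters:
# the symlink and namespace entries must go before rmdir can succeed,
# because configfs refuses to remove directories that are still in use.
nqn="nqn.2016-06.io.spdk:testnqn"
nvmet="/sys/kernel/config/nvmet"
subsys_dir="$nvmet/subsystems/$nqn"

if [ -d "$subsys_dir" ]; then
    echo 0 > "$subsys_dir/namespaces/1/enable"   # stop serving I/O
    rm -f "$nvmet/ports/1/subsystems/$nqn"       # detach subsystem from port
    rmdir "$subsys_dir/namespaces/1"
    rmdir "$nvmet/ports/1"
    rmdir "$subsys_dir"
    modprobe -r nvmet_tcp nvmet                  # unload kernel target modules
else
    echo "no kernel target at $subsys_dir; nothing to clean"
fi
```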
00:33:57.719 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:57.719 00:33:57.719 real 0m9.893s 00:33:57.719 user 0m2.215s 00:33:57.719 sys 0m3.668s 00:33:57.719 22:57:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:57.719 22:57:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:57.719 ************************************ 00:33:57.719 END TEST nvmf_identify_kernel_target 00:33:57.719 ************************************ 00:33:57.719 22:57:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:57.719 22:57:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:57.719 22:57:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:57.719 22:57:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.719 ************************************ 00:33:57.719 START TEST nvmf_auth_host 00:33:57.719 ************************************ 00:33:57.719 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:57.977 * Looking for test storage... 
00:33:57.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:57.977 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:57.977 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:33:57.977 22:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:57.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.977 --rc genhtml_branch_coverage=1 00:33:57.977 --rc genhtml_function_coverage=1 00:33:57.977 --rc genhtml_legend=1 00:33:57.977 --rc geninfo_all_blocks=1 00:33:57.977 --rc geninfo_unexecuted_blocks=1 00:33:57.977 00:33:57.977 ' 00:33:57.977 22:57:01 
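The `lt 1.15 2` check traced above splits each version on `.-:` and compares fields numerically. A standalone sketch of that comparison (assumption: simplified to dot-separated numeric fields only, unlike the full `cmp_versions` helper):

```shell
# Returns 0 (true) when version $1 is strictly older than version $2.
lt() {
    local IFS=.
    local -a a=($1) b=($2)   # split on dots into numeric fields
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        if (( ${a[i]:-0} < ${b[i]:-0} )); then return 0; fi
        if (( ${a[i]:-0} > ${b[i]:-0} )); then return 1; fi
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"
```

Missing fields default to 0, so `lt 1.15 1.15.1` also reports older, matching the field-by-field loop in the trace.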
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:57.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.977 --rc genhtml_branch_coverage=1 00:33:57.977 --rc genhtml_function_coverage=1 00:33:57.977 --rc genhtml_legend=1 00:33:57.977 --rc geninfo_all_blocks=1 00:33:57.977 --rc geninfo_unexecuted_blocks=1 00:33:57.977 00:33:57.977 ' 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:57.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.977 --rc genhtml_branch_coverage=1 00:33:57.977 --rc genhtml_function_coverage=1 00:33:57.977 --rc genhtml_legend=1 00:33:57.977 --rc geninfo_all_blocks=1 00:33:57.977 --rc geninfo_unexecuted_blocks=1 00:33:57.977 00:33:57.977 ' 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:57.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.977 --rc genhtml_branch_coverage=1 00:33:57.977 --rc genhtml_function_coverage=1 00:33:57.977 --rc genhtml_legend=1 00:33:57.977 --rc geninfo_all_blocks=1 00:33:57.977 --rc geninfo_unexecuted_blocks=1 00:33:57.977 00:33:57.977 ' 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.977 22:57:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:57.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:57.977 22:57:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:33:57.977 22:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.877 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:59.877 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:33:59.877 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:59.877 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:59.877 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:59.877 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:59.877 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:59.877 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:33:59.877 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:59.878 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:59.878 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 
00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:59.878 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:59.878 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:59.878 22:57:03 
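The `Found net devices under 0000:0a:00.x` lines come from globbing each PCI device's `net/` directory in sysfs and stripping the path prefix. A sketch of that lookup (assumption: demonstrated on a throwaway directory tree rather than real sysfs, so it runs anywhere):

```shell
# Build a fake sysfs layout: one net device under one PCI address.
root=$(mktemp -d)
pci=0000:0a:00.0
mkdir -p "$root/$pci/net/cvl_0_0"

pci_net_devs=("$root/$pci/net/"*)          # one path per net device
pci_net_devs=("${pci_net_devs[@]##*/}")    # keep only the device names
echo "Found net devices under $pci: ${pci_net_devs[*]}"

rm -rf "$root"
```

The `${var##*/}` expansion is the same basename trick the trace shows at `nvmf/common.sh@425`.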
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:59.878 22:57:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:59.878 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:00.136 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:00.136 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:00.136 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:00.136 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:00.136 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:00.136 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:00.136 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:00.136 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:00.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:00.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:34:00.136 00:34:00.136 --- 10.0.0.2 ping statistics --- 00:34:00.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:00.136 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:34:00.136 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:00.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:00.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:34:00.136 00:34:00.136 --- 10.0.0.1 ping statistics --- 00:34:00.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:00.136 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:34:00.136 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:00.136 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:34:00.136 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:00.136 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:00.136 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:00.136 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:00.136 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:00.136 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:00.136 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:00.136 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:00.136 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:00.136 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:00.136 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.136 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=380550 00:34:00.136 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:00.137 22:57:03 
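Condensed, the namespace plumbing traced above moves one interface into a fresh network namespace, addresses both ends of the 10.0.0.0/24 link, opens the NVMe/TCP port, and verifies reachability with pings in both directions (a sketch only: requires root, and interface names match this log but are host-specific):

```shell
ip netns add cvl_0_0_ns_spdk                  # isolate the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                            # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

With both pings answering, the target app is then launched inside the namespace via `ip netns exec cvl_0_0_ns_spdk`, as the `nvmf_tgt` line that follows shows.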
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 380550 00:34:00.137 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 380550 ']' 00:34:00.137 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:00.137 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:00.137 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:00.137 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:00.137 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.394 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:00.394 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:34:00.394 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:00.394 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:00.394 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.394 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:00.394 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:00.394 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:00.394 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:34:00.394 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:00.394 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:34:00.394 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:34:00.394 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:34:00.394 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:00.394 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=5eb371edac79c515abc37a497d955282 00:34:00.394 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:34:00.394 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.aC6 00:34:00.394 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 5eb371edac79c515abc37a497d955282 0 00:34:00.394 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 5eb371edac79c515abc37a497d955282 0 00:34:00.394 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:34:00.394 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:34:00.394 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=5eb371edac79c515abc37a497d955282 00:34:00.394 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:34:00.394 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:34:00.394 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.aC6 00:34:00.395 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.aC6 00:34:00.395 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.aC6 00:34:00.395 22:57:03 
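The key generation above draws random hex from `/dev/urandom` via `xxd` and wraps it into a `DHHC-1:<digest>:<base64>:` string. A sketch of that wrapping, reusing the key value from this log (assumptions: the secret is the hex string's ASCII bytes and the trailer is its little-endian CRC32; SPDK's `format_key` may differ in detail):

```shell
key=5eb371edac79c515abc37a497d955282   # 32 hex chars, as produced by xxd
formatted=$(python3 - "$key" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                   # ASCII hex string as key material
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte integrity trailer
print("DHHC-1:00:" + base64.b64encode(key + crc).decode() + ":")
EOF
)
echo "$formatted"
```

The `00` field is the digest selector (0 for null here; the trace maps sha256/384/512 to 1/2/3), and the result is written to a `chmod 0600` temp file before being handed to the auth test.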
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:00.395 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:34:00.395 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:00.395 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:34:00.395 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:34:00.395 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:34:00.395 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:00.395 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=55b9cbff33de35c71bb3a8737cca57d80c633fd8b80a20c96dc366f2d3290662 00:34:00.395 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:34:00.395 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.wWS 00:34:00.395 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 55b9cbff33de35c71bb3a8737cca57d80c633fd8b80a20c96dc366f2d3290662 3 00:34:00.395 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 55b9cbff33de35c71bb3a8737cca57d80c633fd8b80a20c96dc366f2d3290662 3 00:34:00.395 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:34:00.395 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:34:00.395 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=55b9cbff33de35c71bb3a8737cca57d80c633fd8b80a20c96dc366f2d3290662 00:34:00.395 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:34:00.395 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 
00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.wWS 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.wWS 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.wWS 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=607a412cb9dc57dc9ce77f4fd2cb6e7ca20e0c452a6db81f 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.stZ 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 607a412cb9dc57dc9ce77f4fd2cb6e7ca20e0c452a6db81f 0 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 607a412cb9dc57dc9ce77f4fd2cb6e7ca20e0c452a6db81f 0 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:34:00.652 22:57:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=607a412cb9dc57dc9ce77f4fd2cb6e7ca20e0c452a6db81f 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.stZ 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.stZ 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.stZ 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=e9462180372f75ce411fd455aa5e08e97c21562c7fee7b29 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.0oB 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key e9462180372f75ce411fd455aa5e08e97c21562c7fee7b29 2 00:34:00.652 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # 
format_key DHHC-1 e9462180372f75ce411fd455aa5e08e97c21562c7fee7b29 2 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=e9462180372f75ce411fd455aa5e08e97c21562c7fee7b29 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.0oB 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.0oB 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.0oB 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=483977c47daf23f6b5f9eac65c4ccb5c 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.o3T 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 483977c47daf23f6b5f9eac65c4ccb5c 1 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 483977c47daf23f6b5f9eac65c4ccb5c 1 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=483977c47daf23f6b5f9eac65c4ccb5c 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.o3T 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.o3T 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.o3T 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@753 -- # key=6649a968692575044be7e3ceb4d23a51 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.uHQ 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 6649a968692575044be7e3ceb4d23a51 1 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 6649a968692575044be7e3ceb4d23a51 1 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=6649a968692575044be7e3ceb4d23a51 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.uHQ 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.uHQ 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.uHQ 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:34:00.653 22:57:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=07298f648ac79a5304bd6a22e9f177903abbfb1d7037524a 00:34:00.653 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:34:00.911 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.d8y 00:34:00.911 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 07298f648ac79a5304bd6a22e9f177903abbfb1d7037524a 2 00:34:00.911 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 07298f648ac79a5304bd6a22e9f177903abbfb1d7037524a 2 00:34:00.911 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:34:00.911 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:34:00.911 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=07298f648ac79a5304bd6a22e9f177903abbfb1d7037524a 00:34:00.911 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:34:00.911 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:34:00.911 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.d8y 00:34:00.911 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.d8y 00:34:00.911 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.d8y 00:34:00.911 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:00.911 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:34:00.911 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:00.911 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:34:00.911 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:34:00.911 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:34:00.911 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:00.911 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=0f08b490e4b56d8529b99e25bde48b18 00:34:00.911 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:34:00.911 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.oDY 00:34:00.911 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 0f08b490e4b56d8529b99e25bde48b18 0 00:34:00.911 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 0f08b490e4b56d8529b99e25bde48b18 0 00:34:00.911 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:34:00.911 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:34:00.911 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=0f08b490e4b56d8529b99e25bde48b18 00:34:00.911 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:34:00.911 22:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.oDY 00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.oDY 00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.oDY 00:34:00.911 22:57:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=2e7a09be6b0443149e7fa0e99bc04eed067368295e8759686d285e0a3a686ac5 00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.StX 00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 2e7a09be6b0443149e7fa0e99bc04eed067368295e8759686d285e0a3a686ac5 3 00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 2e7a09be6b0443149e7fa0e99bc04eed067368295e8759686d285e0a3a686ac5 3 00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=2e7a09be6b0443149e7fa0e99bc04eed067368295e8759686d285e0a3a686ac5 00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 
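Each formatted secret in the trace is written to a private temp file before use: `mktemp -t spdk.key-<digest>.XXX` followed by `chmod 0600`, with the resulting path echoed back into the `keys`/`ckeys` arrays. A sketch of that file-handling step (file naming mirrors the trace; `write_key_file` is an illustrative helper, not part of SPDK):

```python
import os
import stat
import tempfile

def write_key_file(digest_name: str, formatted_key: str) -> str:
    """mktemp -t spdk.key-<digest>.XXX plus chmod 0600, returning the path."""
    fd, path = tempfile.mkstemp(prefix=f"spdk.key-{digest_name}.")
    with os.fdopen(fd, "w") as handle:
        handle.write(formatted_key + "\n")
    # DH-HMAC-CHAP secrets must not be group- or world-readable.
    os.chmod(path, 0o600)
    return path
```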
00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.StX 00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.StX 00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.StX 00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 380550 00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 380550 ']' 00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:00.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
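Once the target process is listening on `/var/tmp/spdk.sock`, the trace that follows loops over `keys`/`ckeys` and registers each file with `rpc_cmd keyring_file_add_key keyN <file>`, adding the paired `ckeyN` only when a controller key exists (`ckeys[4]` is empty above). A sketch of that loop with the JSON-RPC call stubbed out — `rpc_cmd` here is a hypothetical callback, not SPDK's real RPC client:

```python
def register_keys(keys, ckeys, rpc_cmd):
    """Mirror of the host/auth.sh@80-82 loop: key<i>, then optional ckey<i>."""
    for i, key_file in enumerate(keys):
        rpc_cmd("keyring_file_add_key", f"key{i}", key_file)
        if ckeys[i]:  # skipped when the paired controller key is empty
            rpc_cmd("keyring_file_add_key", f"ckey{i}", ckeys[i])
```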
00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:00.911 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.aC6 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.wWS ]] 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wWS 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.stZ 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.0oB ]] 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0oB 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.o3T 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.uHQ ]] 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uHQ 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.169 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.d8y 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.oDY ]] 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.oDY 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.StX 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:01.427 22:57:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:01.427 22:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:02.361 Waiting for block devices as requested 00:34:02.361 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:02.619 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:02.619 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:02.876 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:02.876 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:02.876 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:03.132 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:03.132 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:03.132 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:03.132 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:03.390 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:03.390 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:03.390 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:03.390 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:03.647 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:03.647 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:03.647 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:03.905 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:34:03.905 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:03.905 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:34:03.905 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:34:03.905 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:34:03.905 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:34:03.905 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:34:03.905 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:03.905 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:04.163 No valid GPT data, bailing 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 
-- # echo 10.0.0.1 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:34:04.163 00:34:04.163 Discovery Log Number of Records 2, Generation counter 2 00:34:04.163 =====Discovery Log Entry 0====== 00:34:04.163 trtype: tcp 00:34:04.163 adrfam: ipv4 00:34:04.163 subtype: current discovery subsystem 00:34:04.163 treq: not specified, sq flow control disable supported 00:34:04.163 portid: 1 00:34:04.163 trsvcid: 4420 00:34:04.163 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:04.163 traddr: 10.0.0.1 00:34:04.163 eflags: none 00:34:04.163 sectype: none 00:34:04.163 =====Discovery Log Entry 1====== 00:34:04.163 trtype: tcp 00:34:04.163 adrfam: ipv4 00:34:04.163 subtype: nvme subsystem 00:34:04.163 treq: not specified, sq flow control disable supported 00:34:04.163 portid: 1 00:34:04.163 trsvcid: 4420 00:34:04.163 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:04.163 traddr: 10.0.0.1 00:34:04.163 eflags: none 00:34:04.163 sectype: none 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: ]] 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.163 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.422 nvme0n1 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU: 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU: 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: ]] 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 
00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.422 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.680 nvme0n1 00:34:04.680 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.680 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.680 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.680 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.680 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.680 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.680 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.680 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.680 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.680 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.680 22:57:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.680 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.680 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:04.680 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.680 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.680 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:04.680 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:04.680 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:04.680 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:04.680 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.680 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:04.680 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:04.680 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: ]] 00:34:04.680 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:04.680 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:04.680 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.681 
22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.681 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:04.681 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:04.681 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.681 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:04.681 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.681 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.681 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.681 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.681 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:04.681 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:04.681 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:04.681 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.681 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.681 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:04.681 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.681 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:04.681 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:04.681 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:04.681 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:04.681 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.681 22:57:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.939 nvme0n1 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: ]] 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.939 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:34:05.197 nvme0n1 00:34:05.197 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.197 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.197 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.197 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.197 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.197 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.197 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==: 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==: 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: ]] 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.198 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.456 nvme0n1 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=: 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:05.456 22:57:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=: 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.456 nvme0n1 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.456 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.714 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.714 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.714 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.714 
22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.714 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.714 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:05.714 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.714 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:05.714 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.714 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:05.714 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:05.714 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:05.714 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU: 00:34:05.714 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: 00:34:05.714 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:05.714 22:57:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU: 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: ]] 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: 00:34:05.973 
22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.973 22:57:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.973 nvme0n1 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.973 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.231 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.231 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.231 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.231 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.231 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.231 22:57:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.231 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:06.231 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.231 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:06.231 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:06.231 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:06.231 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:06.231 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: ]] 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:06.232 22:57:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.232 nvme0n1 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.232 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.490 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.490 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.490 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.490 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.490 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.490 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.490 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:06.490 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.490 22:57:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:06.490 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:06.490 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:06.490 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:06.490 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:06.490 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:06.490 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:06.490 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:06.490 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: ]] 00:34:06.490 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:06.490 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:06.490 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.490 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:06.491 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:06.491 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:06.491 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.491 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:34:06.491 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.491 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.491 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.491 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.491 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:06.491 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:06.491 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:06.491 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.491 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.491 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:06.491 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.491 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:06.491 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:06.491 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:06.491 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:06.491 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.491 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.491 nvme0n1 00:34:06.491 22:57:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.491 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.491 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.491 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.491 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.491 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==: 00:34:06.749 22:57:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==: 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: ]] 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.749 nvme0n1 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.749 22:57:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=: 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=: 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.007 22:57:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.007 nvme0n1 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.007 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.265 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.265 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.265 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.265 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:07.265 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.265 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:07.265 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.265 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:07.265 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.265 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.265 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:07.265 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:07.265 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU: 00:34:07.265 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: 00:34:07.265 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.265 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:07.831 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU: 00:34:07.831 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: ]] 00:34:07.832 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: 00:34:07.832 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:07.832 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.832 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:07.832 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:07.832 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:07.832 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.832 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:07.832 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.832 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.832 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.832 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.832 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:07.832 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:07.832 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:07.832 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.832 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.832 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:07.832 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.832 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:34:07.832 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:07.832 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:07.832 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:07.832 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.832 22:57:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.090 nvme0n1 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: ]] 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:08.090 
22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.090 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.349 nvme0n1 00:34:08.349 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.349 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.349 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.349 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.349 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.349 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.349 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.349 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.349 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.349 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.349 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.349 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.349 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:08.349 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.349 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:08.349 22:57:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:08.349 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:08.349 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:08.349 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:08.349 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:08.349 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:08.349 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:08.349 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: ]] 00:34:08.349 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:08.349 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:08.350 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.350 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:08.350 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:08.350 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:08.350 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.350 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:08.350 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:34:08.350 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.350 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.350 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.350 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:08.350 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:08.350 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:08.350 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.350 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.350 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:08.350 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.350 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:08.350 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:08.350 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:08.350 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:08.350 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.350 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.608 nvme0n1 00:34:08.608 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.608 22:57:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.608 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.608 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.608 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.608 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.608 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.608 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.608 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.608 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.866 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.866 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.866 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:08.866 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.866 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:08.866 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:08.866 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:08.866 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==: 00:34:08.866 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: 00:34:08.866 
22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:08.866 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:08.866 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==: 00:34:08.866 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: ]] 00:34:08.866 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: 00:34:08.866 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:08.866 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.866 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:08.866 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:08.866 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:08.866 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.866 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:08.866 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.866 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.866 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.866 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.866 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:08.866 22:57:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:08.866 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:08.866 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.867 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.867 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:08.867 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.867 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:08.867 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:08.867 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:08.867 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:08.867 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.867 22:57:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.125 nvme0n1 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.125 22:57:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=: 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=: 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.125 
22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.125 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.383 nvme0n1 00:34:09.383 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.383 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.383 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.383 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.383 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.383 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.383 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.383 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.383 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.383 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.383 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.383 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:09.383 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.383 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:34:09.383 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.383 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:09.383 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:09.383 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:09.383 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU: 00:34:09.383 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: 00:34:09.383 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:09.383 22:57:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:11.281 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU: 00:34:11.281 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: ]] 00:34:11.281 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: 00:34:11.281 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:11.281 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.281 22:57:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:11.281 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:11.281 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:11.281 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.282 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:11.282 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.282 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.282 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.282 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.282 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:11.282 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:11.282 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:11.282 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.282 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.282 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:11.282 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.282 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:11.282 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:11.282 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:11.282 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:11.282 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.282 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.847 nvme0n1 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: ]] 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:11.847 22:57:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.847 22:57:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.413 nvme0n1 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: ]] 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.413 22:57:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.413 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.979 nvme0n1 00:34:12.979 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.979 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.979 22:57:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.979 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.979 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.979 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==: 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:12.979 22:57:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==: 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: ]] 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:12.979 22:57:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.979 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.545 nvme0n1 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.545 22:57:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=: 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=: 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:13.545 22:57:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.545 22:57:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.110 nvme0n1 00:34:14.110 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.110 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.110 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.110 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.110 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.110 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.110 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.110 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.110 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.110 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.110 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.110 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:14.110 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.110 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:14.110 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.110 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:14.110 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:14.110 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:14.110 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU: 00:34:14.110 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: 00:34:14.111 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:14.111 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:14.111 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU: 00:34:14.111 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: ]] 00:34:14.111 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: 00:34:14.111 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:14.111 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.111 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:14.111 22:57:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:14.111 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:14.111 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.111 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:14.111 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.111 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.111 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.111 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.111 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:14.111 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:14.111 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:14.111 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.111 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.111 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:14.111 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.111 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:14.111 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:14.111 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:14.111 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:14.111 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.111 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.044 nvme0n1 00:34:15.044 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.044 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.044 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.044 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.044 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.044 22:57:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.044 22:57:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: ]] 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.044 22:57:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:15.044 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.044 22:57:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.610 nvme0n1 00:34:15.610 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.610 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.610 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.610 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.610 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.610 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: ]] 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.868 22:57:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.868 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:15.869 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.869 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:15.869 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:15.869 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:15.869 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:15.869 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.869 22:57:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.802 nvme0n1 00:34:16.802 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.802 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.802 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.802 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.802 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.802 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.802 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.802 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.802 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.802 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.802 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.802 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.802 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:16.802 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.802 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.802 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:16.802 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:16.802 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==: 00:34:16.802 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: 00:34:16.802 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.803 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:16.803 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==: 00:34:16.803 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: ]] 00:34:16.803 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: 00:34:16.803 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:16.803 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.803 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:16.803 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:16.803 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:16.803 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.803 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:16.803 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.803 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.803 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.803 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.803 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:16.803 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:16.803 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:16.803 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.803 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.803 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:16.803 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.803 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:16.803 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:16.803 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:16.803 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:16.803 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.803 22:57:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.736 nvme0n1 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=: 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=: 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.736 
22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.736 22:57:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.669 nvme0n1 00:34:18.669 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.669 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.669 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.669 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.669 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.669 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.669 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.669 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.669 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.669 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.669 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.669 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:18.669 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:18.669 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:34:18.669 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:34:18.669 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.669 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:18.669 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:18.669 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:18.669 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU: 00:34:18.669 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU: 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: ]] 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.670 nvme0n1 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.670 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:18.929 
22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: ]] 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.929 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.929 nvme0n1 
00:34:18.929 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.929 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.929 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.929 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.929 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.929 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.929 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.929 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.929 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.929 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:19.188 22:57:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: ]] 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.188 
22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.188 nvme0n1 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.188 22:57:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==: 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==: 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: ]] 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.188 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.447 nvme0n1 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=: 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=: 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.447 22:57:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.447 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.705 nvme0n1 00:34:19.705 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.705 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.705 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.705 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.705 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.705 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.705 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.705 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.705 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.705 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.705 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.705 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:19.705 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.705 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:34:19.705 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.705 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:19.705 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:19.705 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU: 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU: 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: ]] 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.706 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.964 nvme0n1 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:19.964 
22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: ]] 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.964 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.222 nvme0n1 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 
00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: ]] 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:20.222 22:57:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.222 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.481 nvme0n1 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.481 22:57:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==: 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==: 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: ]] 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.481 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.740 nvme0n1 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=: 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=: 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.740 22:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.998 nvme0n1 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.998 22:57:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU: 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU: 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: ]] 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.998 22:57:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:20.998 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.998 22:57:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.256 nvme0n1 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: ]] 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.256 
22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:21.256 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:21.257 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:21.257 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.257 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.515 nvme0n1 00:34:21.515 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.515 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.515 22:57:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.515 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.515 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.515 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.515 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.515 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.515 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.515 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:21.773 22:57:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: ]] 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # local -A ip_candidates 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.773 22:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.031 nvme0n1 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==: 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==: 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: ]] 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:22.031 22:57:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.031 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.289 nvme0n1 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.290 22:57:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=: 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=: 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:22.290 22:57:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:22.290 
22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.290 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.548 nvme0n1 00:34:22.548 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.548 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.548 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.548 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.548 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.548 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.548 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.548 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.548 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.548 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.548 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.548 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:22.548 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.548 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:22.548 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.548 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:22.548 22:57:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:22.548 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:22.548 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU: 00:34:22.548 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: 00:34:22.548 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:22.548 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:22.548 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU: 00:34:22.548 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: ]] 00:34:22.549 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: 00:34:22.549 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:22.549 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.549 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:22.549 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:22.549 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:22.549 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.549 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:34:22.549 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.549 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.549 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.549 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.549 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:22.549 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:22.549 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:22.549 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.549 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.549 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:22.549 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.549 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:22.549 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:22.549 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:22.549 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:22.549 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.549 22:57:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.113 nvme0n1 
00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:23.113 22:57:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: ]] 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.113 
22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.113 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.679 nvme0n1 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.679 22:57:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: ]] 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.679 22:57:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.245 nvme0n1 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==: 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==: 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: ]] 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 
-- # ip=NVMF_INITIATOR_IP 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.245 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.811 nvme0n1 00:34:24.811 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.811 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.811 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.811 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.811 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.811 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.811 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.811 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.811 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.811 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.811 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.811 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:24.811 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:24.811 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.811 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:24.811 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:24.811 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=: 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=: 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.812 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:25.377 nvme0n1 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:25.377 22:57:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU: 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU: 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: ]] 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.377 22:57:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.377 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.311 nvme0n1 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:26.311 22:57:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: ]] 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 
00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.311 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.245 nvme0n1 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.246 
22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: ]] 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.246 22:57:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:27.246 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:28.262 nvme0n1
00:34:28.262 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:28.262 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:28.262 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:28.262 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:28.262 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==:
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw:
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==:
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: ]]
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw:
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:28.263 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.229 nvme0n1
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=:
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=:
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:29.229 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.794 nvme0n1
00:34:29.794 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:29.794 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:29.794 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:29.794 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:29.794 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.794 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU:
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=:
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU:
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: ]]
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=:
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.052 nvme0n1
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==:
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==:
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==:
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: ]]
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==:
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:30.052 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:30.053 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:34:30.053 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:30.053 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.310 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:30.310 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:30.310 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:34:30.310 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:34:30.310 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.311 nvme0n1
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd:
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf:
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd:
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: ]]
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf:
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:30.311 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.568 nvme0n1
00:34:30.568 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:30.568 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:30.568 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:30.568 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.568 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:30.568 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:30.568 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:30.568 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:30.568 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:30.568 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.568 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:30.568 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:30.568 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:34:30.568 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:30.568 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:30.568 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:30.568 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:30.568 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==:
00:34:30.568 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw:
00:34:30.568 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:30.569 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:30.569 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==:
00:34:30.569 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: ]]
00:34:30.569 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw:
00:34:30.569 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:34:30.569 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:30.569 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:30.569 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:30.569 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:30.569 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:30.569 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:34:30.569 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:30.569 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.569 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:30.569 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:30.569 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:34:30.569 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:34:30.569 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:34:30.569 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:30.569 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:30.569 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:34:30.569 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:30.569 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:34:30.569 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:34:30.569 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:34:30.569 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:30.569 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:30.569 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.826 nvme0n1
00:34:30.826 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:30.826 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=:
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=:
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:30.827 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.827 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:30.827 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:30.827 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:34:30.827 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:34:30.827 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:34:30.827 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:30.827 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:30.827 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:34:30.827 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:30.827 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:34:30.827 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:34:30.827 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:34:30.827 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:30.827 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:30.827 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:31.085 nvme0n1
00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU:
00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- #
ckey=DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: 00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU: 00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: ]] 00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: 00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.085 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.085 22:57:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.086 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:31.086 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:31.086 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:31.086 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.086 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.086 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:31.086 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.086 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:31.086 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:31.086 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:31.086 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:31.086 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.086 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.344 nvme0n1 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:31.344 22:57:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: ]] 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A 
ip_candidates 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.344 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.603 nvme0n1 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: ]] 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:31.603 
22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.603 22:57:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.603 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.862 nvme0n1 00:34:31.862 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.862 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.862 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.862 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.862 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.862 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.862 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.862 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.862 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.862 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.862 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.862 22:57:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.862 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:31.862 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.862 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:31.862 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:31.862 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:31.862 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==: 00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: 00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==: 00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: ]] 00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: 00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.862 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.120 nvme0n1 00:34:32.120 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.120 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.120 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.120 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.120 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.120 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.120 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.120 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.120 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.120 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.120 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.120 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.120 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:32.120 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.120 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:32.120 22:57:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:32.120 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:32.120 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=: 00:34:32.120 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:32.120 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:32.121 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:32.121 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=: 00:34:32.121 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:32.121 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:32.121 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.121 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:32.121 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:32.121 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:32.121 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.121 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:32.121 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.121 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.121 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.121 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.121 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:32.121 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:32.121 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:32.121 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.121 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.121 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:32.121 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.121 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:32.121 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:32.121 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:32.121 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:32.121 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.121 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.379 nvme0n1 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU: 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU: 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: ]] 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:32.379 22:57:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.379 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.637 nvme0n1 00:34:32.637 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.637 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.637 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.637 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.637 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.637 22:57:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.637 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.637 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.637 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.637 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.637 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.637 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.637 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:32.637 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.637 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:32.637 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:32.637 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:32.637 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:32.637 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:32.637 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:32.637 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:32.638 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:32.638 22:57:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: ]] 00:34:32.638 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:32.638 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:32.638 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.638 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:32.638 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:32.638 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:32.638 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.638 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:32.638 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.638 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.638 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.638 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.638 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:32.638 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:32.638 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:32.638 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.638 22:57:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.638 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:32.638 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.638 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:32.638 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:32.638 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:32.638 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:32.638 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.638 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.895 nvme0n1 00:34:32.895 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.895 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.895 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.895 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.895 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.895 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.152 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.152 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.152 22:57:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.152 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.152 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.152 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.152 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:33.152 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.152 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:33.152 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:33.152 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:33.152 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:33.152 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:33.152 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:33.152 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:33.152 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:33.152 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: ]] 00:34:33.152 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:33.152 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:33.152 22:57:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.152 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:33.152 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:33.152 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:33.152 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.152 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:33.153 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.153 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.153 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.153 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.153 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:33.153 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:33.153 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:33.153 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.153 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.153 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:33.153 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.153 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:33.153 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:33.153 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:33.153 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:33.153 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.153 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.410 nvme0n1 00:34:33.410 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.410 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.410 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.410 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.410 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.410 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.410 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.410 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.410 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.410 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.410 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.410 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.410 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:34:33.410 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.410 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:33.410 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:33.410 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:33.410 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==: 00:34:33.410 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: 00:34:33.410 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:33.410 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:33.410 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==: 00:34:33.410 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: ]] 00:34:33.410 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: 00:34:33.410 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:33.410 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.411 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:33.411 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:33.411 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:33.411 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.411 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:33.411 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.411 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.411 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.411 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.411 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:33.411 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:33.411 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:33.411 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.411 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.411 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:33.411 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.411 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:33.411 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:33.411 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:33.411 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:33.411 22:57:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.411 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.668 nvme0n1 00:34:33.668 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.668 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.668 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.668 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.668 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.668 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.668 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.668 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=: 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=: 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.669 
22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.669 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.927 nvme0n1 00:34:33.927 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.927 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.927 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.927 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.927 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:34.185 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.185 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.185 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.185 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.185 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.185 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.185 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:34.185 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.185 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:34.185 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.185 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:34.185 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:34.185 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:34.185 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU: 00:34:34.185 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: 00:34:34.185 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:34.185 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:34.185 22:57:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU: 00:34:34.185 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: ]] 00:34:34.186 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: 00:34:34.186 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:34.186 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.186 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:34.186 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:34.186 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:34.186 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.186 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:34.186 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.186 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.186 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.186 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.186 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:34.186 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:34.186 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 
-- # local -A ip_candidates 00:34:34.186 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.186 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.186 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:34.186 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.186 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:34.186 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:34.186 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:34.186 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:34.186 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.186 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.751 nvme0n1 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: ]] 00:34:34.751 22:57:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.751 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:34.752 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.752 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.752 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.752 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.752 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:34.752 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:34.752 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:34.752 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.752 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.752 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
[[ -z tcp ]] 00:34:34.752 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.752 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:34.752 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:34.752 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:34.752 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:34.752 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.752 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.318 nvme0n1 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: ]] 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:35.318 
22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.318 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.576 nvme0n1 00:34:35.576 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.576 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.576 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.576 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.576 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.834 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.834 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.834 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.834 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.834 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.834 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.834 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.834 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:34:35.834 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.834 22:57:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:35.834 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:35.834 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==: 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==: 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: ]] 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.835 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:36.401 nvme0n1 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=: 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=: 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:36.401 
22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.401 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.659 nvme0n1 00:34:36.659 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.659 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.659 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.659 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.659 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.659 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU: 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWViMzcxZWRhYzc5YzUxNWFiYzM3YTQ5N2Q5NTUyODLNw9dU: 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: ]] 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTViOWNiZmYzM2RlMzVjNzFiYjNhODczN2NjYTU3ZDgwYzYzM2ZkOGI4MGEyMGM5NmRjMzY2ZjJkMzI5MDY2MgTjtnI=: 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:36.918 22:57:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.918 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.852 nvme0n1 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.852 22:57:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: ]] 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:37.852 22:57:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.852 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.787 nvme0n1 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.787 22:57:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: ]] 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:38.787 22:57:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.787 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.721 nvme0n1 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.721 22:57:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==: 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDcyOThmNjQ4YWM3OWE1MzA0YmQ2YTIyZTlmMTc3OTAzYWJiZmIxZDcwMzc1MjRhIf+HCw==: 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: ]] 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGYwOGI0OTBlNGI1NmQ4NTI5Yjk5ZTI1YmRlNDhiMTjfZcxw: 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:39.721 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:39.722 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:39.722 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.722 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:40.656 nvme0n1 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=: 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmU3YTA5YmU2YjA0NDMxNDllN2ZhMGU5OWJjMDRlZWQwNjczNjgyOTVlODc1OTY4NmQyODVlMGEzYTY4NmFjNS0KddM=: 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:40.656 
22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.656 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.590 nvme0n1 00:34:41.590 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.590 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.590 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.590 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.590 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.590 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.590 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: ]] 00:34:41.591 
22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.591 request: 00:34:41.591 { 00:34:41.591 "name": "nvme0", 00:34:41.591 "trtype": "tcp", 00:34:41.591 "traddr": "10.0.0.1", 00:34:41.591 "adrfam": "ipv4", 00:34:41.591 "trsvcid": "4420", 00:34:41.591 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:41.591 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:41.591 "prchk_reftag": false, 00:34:41.591 "prchk_guard": false, 00:34:41.591 "hdgst": false, 00:34:41.591 "ddgst": false, 00:34:41.591 "allow_unrecognized_csi": false, 00:34:41.591 "method": "bdev_nvme_attach_controller", 00:34:41.591 "req_id": 1 00:34:41.591 } 00:34:41.591 Got JSON-RPC error response 00:34:41.591 response: 00:34:41.591 { 00:34:41.591 "code": -5, 00:34:41.591 "message": "Input/output 
error" 00:34:41.591 } 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.591 request: 00:34:41.591 { 00:34:41.591 "name": "nvme0", 00:34:41.591 "trtype": "tcp", 00:34:41.591 "traddr": "10.0.0.1", 
00:34:41.591 "adrfam": "ipv4", 00:34:41.591 "trsvcid": "4420", 00:34:41.591 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:41.591 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:41.591 "prchk_reftag": false, 00:34:41.591 "prchk_guard": false, 00:34:41.591 "hdgst": false, 00:34:41.591 "ddgst": false, 00:34:41.591 "dhchap_key": "key2", 00:34:41.591 "allow_unrecognized_csi": false, 00:34:41.591 "method": "bdev_nvme_attach_controller", 00:34:41.591 "req_id": 1 00:34:41.591 } 00:34:41.591 Got JSON-RPC error response 00:34:41.591 response: 00:34:41.591 { 00:34:41.591 "code": -5, 00:34:41.591 "message": "Input/output error" 00:34:41.591 } 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:41.591 22:57:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:41.591 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:41.592 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:41.592 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:41.592 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:41.592 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:41.592 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:41.592 22:57:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:41.592 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:41.592 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.592 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.850 request: 00:34:41.850 { 00:34:41.850 "name": "nvme0", 00:34:41.850 "trtype": "tcp", 00:34:41.850 "traddr": "10.0.0.1", 00:34:41.850 "adrfam": "ipv4", 00:34:41.850 "trsvcid": "4420", 00:34:41.850 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:41.850 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:41.850 "prchk_reftag": false, 00:34:41.850 "prchk_guard": false, 00:34:41.850 "hdgst": false, 00:34:41.850 "ddgst": false, 00:34:41.850 "dhchap_key": "key1", 00:34:41.850 "dhchap_ctrlr_key": "ckey2", 00:34:41.850 "allow_unrecognized_csi": false, 00:34:41.850 "method": "bdev_nvme_attach_controller", 00:34:41.850 "req_id": 1 00:34:41.850 } 00:34:41.850 Got JSON-RPC error response 00:34:41.850 response: 00:34:41.850 { 00:34:41.850 "code": -5, 00:34:41.850 "message": "Input/output error" 00:34:41.850 } 00:34:41.850 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:41.850 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:41.850 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:41.850 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:41.850 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:41.850 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:34:41.850 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:41.850 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:41.850 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:41.850 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.850 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.850 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:41.850 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.850 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:41.850 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:41.850 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:41.850 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:41.850 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.850 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.850 nvme0n1 00:34:41.850 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.850 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:41.850 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.850 22:57:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:41.850 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:41.850 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:41.850 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:41.850 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:41.850 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:41.850 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:41.850 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:41.850 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: ]] 00:34:41.850 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:41.850 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:41.850 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.850 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.108 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.108 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.108 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.108 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.108 22:57:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:34:42.108 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.108 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.108 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:42.108 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:42.108 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:42.108 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:42.108 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:42.108 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:42.108 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:42.108 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:42.108 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.108 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.108 request: 00:34:42.108 { 00:34:42.108 "name": "nvme0", 00:34:42.108 "dhchap_key": "key1", 00:34:42.108 "dhchap_ctrlr_key": "ckey2", 00:34:42.108 "method": "bdev_nvme_set_keys", 00:34:42.108 "req_id": 1 00:34:42.108 } 00:34:42.108 Got JSON-RPC error response 00:34:42.108 response: 00:34:42.108 { 00:34:42.108 "code": -13, 00:34:42.108 "message": "Permission denied" 00:34:42.108 } 00:34:42.108 
22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:42.108 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:42.108 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:42.108 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:42.108 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:42.108 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.108 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.108 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:42.108 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.108 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.108 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:42.108 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:43.042 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.042 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.042 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.042 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:43.042 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.299 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:34:43.299 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:43.299 22:57:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.299 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:43.299 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:43.299 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:43.299 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:43.299 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:43.299 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:43.299 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:43.299 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA3YTQxMmNiOWRjNTdkYzljZTc3ZjRmZDJjYjZlN2NhMjBlMGM0NTJhNmRiODFm3LqyRA==: 00:34:43.299 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: ]] 00:34:43.299 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0NjIxODAzNzJmNzVjZTQxMWZkNDU1YWE1ZTA4ZTk3YzIxNTYyYzdmZWU3YjI5qAZr1Q==: 00:34:43.299 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:34:43.299 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:43.299 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:43.299 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:43.299 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.299 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.299 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:43.299 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.300 nvme0n1 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzOTc3YzQ3ZGFmMjNmNmI1ZjllYWM2NWM0Y2NiNWMYRrFd: 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: ]] 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0OWE5Njg2OTI1NzUwNDRiZTdlM2NlYjRkMjNhNTF+GLgf: 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:43.300 request: 00:34:43.300 { 00:34:43.300 "name": "nvme0", 00:34:43.300 "dhchap_key": "key2", 00:34:43.300 "dhchap_ctrlr_key": "ckey1", 00:34:43.300 "method": "bdev_nvme_set_keys", 00:34:43.300 "req_id": 1 00:34:43.300 } 00:34:43.300 Got JSON-RPC error response 00:34:43.300 response: 00:34:43.300 { 00:34:43.300 "code": -13, 00:34:43.300 "message": "Permission denied" 00:34:43.300 } 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.300 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.557 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:34:43.557 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:34:44.491 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.491 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:44.491 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.491 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@10 -- # set +x 00:34:44.491 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.491 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:34:44.491 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:34:44.491 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:34:44.491 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:44.491 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:44.491 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:34:44.491 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:44.491 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:34:44.491 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:44.491 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:44.492 rmmod nvme_tcp 00:34:44.492 rmmod nvme_fabrics 00:34:44.492 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:44.492 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:34:44.492 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:34:44.492 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 380550 ']' 00:34:44.492 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 380550 00:34:44.492 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 380550 ']' 00:34:44.492 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 380550 00:34:44.492 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 
00:34:44.492 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:44.492 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 380550 00:34:44.492 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:44.492 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:44.492 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 380550' 00:34:44.492 killing process with pid 380550 00:34:44.492 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 380550 00:34:44.492 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 380550 00:34:44.750 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:44.750 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:44.750 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:44.750 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:34:44.750 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:34:44.750 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:44.750 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:34:44.750 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:44.750 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:44.750 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:44.750 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:44.750 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:46.661 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:46.920 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:46.921 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:46.921 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:46.921 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:46.921 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:34:46.921 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:46.921 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:46.921 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:46.921 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:46.921 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:34:46.921 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:34:46.921 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:48.302 0000:00:04.7 (8086 0e27): ioatdma -> 
vfio-pci 00:34:48.302 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:48.302 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:48.302 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:48.302 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:48.302 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:48.302 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:48.302 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:48.302 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:48.302 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:48.302 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:48.302 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:48.302 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:48.302 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:48.302 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:48.302 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:49.242 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:49.242 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.aC6 /tmp/spdk.key-null.stZ /tmp/spdk.key-sha256.o3T /tmp/spdk.key-sha384.d8y /tmp/spdk.key-sha512.StX /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:49.242 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:50.621 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:50.622 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:50.622 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:50.622 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:50.622 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:50.622 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:50.622 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:50.622 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:50.622 0000:00:04.0 (8086 
0e20): Already using the vfio-pci driver 00:34:50.622 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:50.622 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:50.622 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:50.622 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:50.622 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:50.622 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:50.622 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:50.622 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:50.622 00:34:50.622 real 0m52.841s 00:34:50.622 user 0m50.567s 00:34:50.622 sys 0m6.202s 00:34:50.622 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:50.622 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.622 ************************************ 00:34:50.622 END TEST nvmf_auth_host 00:34:50.622 ************************************ 00:34:50.622 22:57:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:34:50.622 22:57:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:50.622 22:57:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:50.622 22:57:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:50.622 22:57:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.622 ************************************ 00:34:50.622 START TEST nvmf_digest 00:34:50.622 ************************************ 00:34:50.622 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:50.622 * Looking for test storage... 
00:34:50.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:50.622 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:50.622 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:34:50.622 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:50.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:50.881 --rc genhtml_branch_coverage=1 00:34:50.881 --rc genhtml_function_coverage=1 00:34:50.881 --rc genhtml_legend=1 00:34:50.881 --rc geninfo_all_blocks=1 00:34:50.881 --rc geninfo_unexecuted_blocks=1 00:34:50.881 00:34:50.881 ' 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:50.881 --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 00:34:50.881 --rc genhtml_branch_coverage=1 00:34:50.881 --rc genhtml_function_coverage=1 00:34:50.881 --rc genhtml_legend=1 00:34:50.881 --rc geninfo_all_blocks=1 00:34:50.881 --rc geninfo_unexecuted_blocks=1 00:34:50.881 00:34:50.881 ' 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:50.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:50.881 --rc genhtml_branch_coverage=1 00:34:50.881 --rc genhtml_function_coverage=1 00:34:50.881 --rc genhtml_legend=1 00:34:50.881 --rc geninfo_all_blocks=1 00:34:50.881 --rc geninfo_unexecuted_blocks=1 00:34:50.881 00:34:50.881 ' 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:50.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:50.881 --rc genhtml_branch_coverage=1 00:34:50.881 --rc genhtml_function_coverage=1 00:34:50.881 --rc genhtml_legend=1 00:34:50.881 --rc geninfo_all_blocks=1 00:34:50.881 --rc geninfo_unexecuted_blocks=1 00:34:50.881 00:34:50.881 ' 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:50.881 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # 
export PATH 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:50.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- 
# bperfsock=/var/tmp/bperf.sock 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:34:50.882 22:57:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:53.421 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:53.421 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:34:53.421 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:53.421 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:53.421 
22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:53.421 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:53.421 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:53.421 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:34:53.421 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:53.421 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:34:53.421 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:34:53.421 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:34:53.421 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:34:53.421 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:34:53.421 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:34:53.421 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:53.421 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:53.421 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:53.421 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:53.421 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:53.421 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:53.421 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:53.421 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:53.421 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:53.421 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:53.421 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:53.421 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:53.421 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:53.421 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:53.422 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:53.422 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:53.422 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:53.422 
22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:53.422 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # is_hw=yes 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:53.422 22:57:56 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:53.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:53.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:34:53.422 00:34:53.422 --- 10.0.0.2 ping statistics --- 00:34:53.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.422 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:53.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:53.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:34:53.422 00:34:53.422 --- 10.0.0.1 ping statistics --- 00:34:53.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.422 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:53.422 ************************************ 00:34:53.422 START TEST nvmf_digest_clean 00:34:53.422 ************************************ 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@10 -- # set +x 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=390810 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 390810 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 390810 ']' 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:53.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:53.422 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:53.422 [2024-10-11 22:57:56.340858] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:34:53.422 [2024-10-11 22:57:56.340944] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:53.422 [2024-10-11 22:57:56.417839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:53.422 [2024-10-11 22:57:56.469254] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:53.422 [2024-10-11 22:57:56.469310] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:53.423 [2024-10-11 22:57:56.469323] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:53.423 [2024-10-11 22:57:56.469334] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:53.423 [2024-10-11 22:57:56.469344] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:53.423 [2024-10-11 22:57:56.469974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:53.423 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:53.423 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:53.423 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:53.423 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:53.423 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:53.423 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:53.423 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:34:53.423 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:34:53.423 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:34:53.423 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.423 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:53.681 null0 00:34:53.681 [2024-10-11 22:57:56.707633] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:53.681 [2024-10-11 22:57:56.731878] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:53.681 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.681 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:34:53.681 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:53.681 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:53.681 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:53.681 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:53.681 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:53.681 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:53.681 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=390832 00:34:53.681 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 390832 /var/tmp/bperf.sock 00:34:53.681 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 390832 ']' 00:34:53.681 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:53.681 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:53.681 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:53.681 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:53.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:34:53.681 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:53.681 22:57:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:53.681 [2024-10-11 22:57:56.784189] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:34:53.681 [2024-10-11 22:57:56.784281] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid390832 ] 00:34:53.681 [2024-10-11 22:57:56.846798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:53.681 [2024-10-11 22:57:56.893749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:53.939 22:57:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:53.939 22:57:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:53.939 22:57:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:53.939 22:57:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:53.939 22:57:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:54.198 22:57:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:54.198 22:57:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:54.764 nvme0n1 00:34:54.764 22:57:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:54.764 22:57:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:54.764 Running I/O for 2 seconds... 00:34:57.074 18194.00 IOPS, 71.07 MiB/s [2024-10-11T20:58:00.342Z] 18433.00 IOPS, 72.00 MiB/s 00:34:57.074 Latency(us) 00:34:57.074 [2024-10-11T20:58:00.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:57.074 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:57.074 nvme0n1 : 2.01 18440.64 72.03 0.00 0.00 6930.54 3519.53 17185.00 00:34:57.074 [2024-10-11T20:58:00.342Z] =================================================================================================================== 00:34:57.074 [2024-10-11T20:58:00.342Z] Total : 18440.64 72.03 0.00 0.00 6930.54 3519.53 17185.00 00:34:57.074 { 00:34:57.074 "results": [ 00:34:57.074 { 00:34:57.074 "job": "nvme0n1", 00:34:57.074 "core_mask": "0x2", 00:34:57.074 "workload": "randread", 00:34:57.074 "status": "finished", 00:34:57.074 "queue_depth": 128, 00:34:57.074 "io_size": 4096, 00:34:57.074 "runtime": 2.008715, 00:34:57.074 "iops": 18440.644889892294, 00:34:57.074 "mibps": 72.03376910114177, 00:34:57.074 "io_failed": 0, 00:34:57.074 "io_timeout": 0, 00:34:57.074 "avg_latency_us": 6930.540742420515, 00:34:57.074 "min_latency_us": 3519.525925925926, 00:34:57.074 "max_latency_us": 17184.995555555557 00:34:57.074 } 00:34:57.074 ], 00:34:57.074 "core_count": 1 00:34:57.074 } 00:34:57.074 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:57.074 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:34:57.074 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:57.074 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:57.074 | select(.opcode=="crc32c") 00:34:57.074 | "\(.module_name) \(.executed)"' 00:34:57.074 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:57.332 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:57.332 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:57.332 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:57.332 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:57.332 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 390832 00:34:57.332 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 390832 ']' 00:34:57.332 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 390832 00:34:57.332 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:34:57.332 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:57.332 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 390832 00:34:57.332 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:57.332 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:57.332 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 390832' 00:34:57.332 killing process with pid 390832 00:34:57.332 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 390832 00:34:57.332 Received shutdown signal, test time was about 2.000000 seconds 00:34:57.332 00:34:57.332 Latency(us) 00:34:57.332 [2024-10-11T20:58:00.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:57.332 [2024-10-11T20:58:00.600Z] =================================================================================================================== 00:34:57.332 [2024-10-11T20:58:00.600Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:57.332 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 390832 00:34:57.332 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:34:57.332 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:57.332 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:57.332 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:57.332 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:57.332 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:57.332 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:57.332 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=391360 00:34:57.333 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:57.333 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 391360 /var/tmp/bperf.sock 00:34:57.333 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 391360 ']' 00:34:57.333 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:57.333 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:57.333 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:57.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:57.333 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:57.333 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:57.590 [2024-10-11 22:58:00.638718] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:34:57.590 [2024-10-11 22:58:00.638817] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid391360 ] 00:34:57.590 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:57.590 Zero copy mechanism will not be used. 
00:34:57.590 [2024-10-11 22:58:00.699402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:57.590 [2024-10-11 22:58:00.746948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:57.849 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:57.849 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:57.849 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:57.849 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:57.849 22:58:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:58.108 22:58:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:58.108 22:58:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:58.366 nvme0n1 00:34:58.366 22:58:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:58.366 22:58:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:58.623 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:58.623 Zero copy mechanism will not be used. 00:34:58.623 Running I/O for 2 seconds... 
00:35:00.491 5908.00 IOPS, 738.50 MiB/s [2024-10-11T20:58:03.759Z] 6083.00 IOPS, 760.38 MiB/s 00:35:00.491 Latency(us) 00:35:00.491 [2024-10-11T20:58:03.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:00.491 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:00.491 nvme0n1 : 2.00 6084.75 760.59 0.00 0.00 2624.83 591.64 9126.49 00:35:00.491 [2024-10-11T20:58:03.759Z] =================================================================================================================== 00:35:00.491 [2024-10-11T20:58:03.759Z] Total : 6084.75 760.59 0.00 0.00 2624.83 591.64 9126.49 00:35:00.491 { 00:35:00.491 "results": [ 00:35:00.491 { 00:35:00.491 "job": "nvme0n1", 00:35:00.491 "core_mask": "0x2", 00:35:00.491 "workload": "randread", 00:35:00.491 "status": "finished", 00:35:00.491 "queue_depth": 16, 00:35:00.491 "io_size": 131072, 00:35:00.491 "runtime": 2.004192, 00:35:00.491 "iops": 6084.746371605116, 00:35:00.491 "mibps": 760.5932964506395, 00:35:00.491 "io_failed": 0, 00:35:00.491 "io_timeout": 0, 00:35:00.491 "avg_latency_us": 2624.8333363096594, 00:35:00.491 "min_latency_us": 591.6444444444444, 00:35:00.491 "max_latency_us": 9126.494814814814 00:35:00.491 } 00:35:00.491 ], 00:35:00.491 "core_count": 1 00:35:00.491 } 00:35:00.491 22:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:00.491 22:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:00.491 22:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:00.491 22:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:00.491 | select(.opcode=="crc32c") 00:35:00.491 | "\(.module_name) \(.executed)"' 00:35:00.491 22:58:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:00.749 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:00.749 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:00.749 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:00.749 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:00.749 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 391360 00:35:00.749 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 391360 ']' 00:35:00.749 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 391360 00:35:00.749 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:00.749 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:00.749 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 391360 00:35:01.007 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:01.007 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:01.007 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 391360' 00:35:01.007 killing process with pid 391360 00:35:01.008 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 391360 00:35:01.008 Received shutdown signal, test time was about 2.000000 seconds 00:35:01.008 
00:35:01.008 Latency(us) 00:35:01.008 [2024-10-11T20:58:04.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:01.008 [2024-10-11T20:58:04.276Z] =================================================================================================================== 00:35:01.008 [2024-10-11T20:58:04.276Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:01.008 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 391360 00:35:01.008 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:01.008 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:01.008 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:01.008 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:01.008 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:01.008 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:01.008 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:01.008 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=391760 00:35:01.008 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:01.008 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 391760 /var/tmp/bperf.sock 00:35:01.008 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 391760 ']' 00:35:01.008 22:58:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:01.008 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:01.008 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:01.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:01.008 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:01.008 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:01.266 [2024-10-11 22:58:04.289504] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:35:01.266 [2024-10-11 22:58:04.289626] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid391760 ] 00:35:01.266 [2024-10-11 22:58:04.349739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:01.266 [2024-10-11 22:58:04.392872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:01.266 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:01.266 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:35:01.266 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:01.266 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:01.266 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:01.832 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:01.832 22:58:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:02.398 nvme0n1 00:35:02.398 22:58:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:02.398 22:58:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:02.398 Running I/O for 2 seconds... 
00:35:04.265 21576.00 IOPS, 84.28 MiB/s [2024-10-11T20:58:07.533Z] 20208.00 IOPS, 78.94 MiB/s 00:35:04.265 Latency(us) 00:35:04.265 [2024-10-11T20:58:07.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:04.265 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:04.265 nvme0n1 : 2.01 20204.46 78.92 0.00 0.00 6320.84 2839.89 13786.83 00:35:04.265 [2024-10-11T20:58:07.533Z] =================================================================================================================== 00:35:04.265 [2024-10-11T20:58:07.533Z] Total : 20204.46 78.92 0.00 0.00 6320.84 2839.89 13786.83 00:35:04.265 { 00:35:04.265 "results": [ 00:35:04.265 { 00:35:04.265 "job": "nvme0n1", 00:35:04.265 "core_mask": "0x2", 00:35:04.265 "workload": "randwrite", 00:35:04.265 "status": "finished", 00:35:04.265 "queue_depth": 128, 00:35:04.265 "io_size": 4096, 00:35:04.265 "runtime": 2.006686, 00:35:04.265 "iops": 20204.4565019141, 00:35:04.265 "mibps": 78.92365821060196, 00:35:04.265 "io_failed": 0, 00:35:04.265 "io_timeout": 0, 00:35:04.265 "avg_latency_us": 6320.841977257447, 00:35:04.265 "min_latency_us": 2839.8933333333334, 00:35:04.265 "max_latency_us": 13786.832592592593 00:35:04.265 } 00:35:04.265 ], 00:35:04.265 "core_count": 1 00:35:04.265 } 00:35:04.523 22:58:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:04.523 22:58:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:04.523 22:58:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:04.523 22:58:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:04.523 22:58:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:35:04.523 | select(.opcode=="crc32c") 00:35:04.523 | "\(.module_name) \(.executed)"' 00:35:04.782 22:58:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:04.782 22:58:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:04.782 22:58:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:04.782 22:58:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:04.782 22:58:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 391760 00:35:04.782 22:58:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 391760 ']' 00:35:04.782 22:58:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 391760 00:35:04.782 22:58:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:04.782 22:58:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:04.782 22:58:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 391760 00:35:04.782 22:58:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:04.782 22:58:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:04.782 22:58:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 391760' 00:35:04.782 killing process with pid 391760 00:35:04.782 22:58:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 391760 00:35:04.782 Received shutdown signal, test time was about 2.000000 seconds 00:35:04.782 00:35:04.782 
Latency(us) 00:35:04.782 [2024-10-11T20:58:08.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:04.782 [2024-10-11T20:58:08.050Z] =================================================================================================================== 00:35:04.782 [2024-10-11T20:58:08.050Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:04.782 22:58:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 391760 00:35:05.041 22:58:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:05.041 22:58:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:05.041 22:58:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:05.041 22:58:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:05.041 22:58:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:05.041 22:58:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:05.041 22:58:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:05.041 22:58:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=392172 00:35:05.041 22:58:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:05.041 22:58:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 392172 /var/tmp/bperf.sock 00:35:05.041 22:58:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 392172 ']' 00:35:05.041 22:58:08 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:05.041 22:58:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:05.041 22:58:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:05.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:05.041 22:58:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:05.041 22:58:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:05.041 [2024-10-11 22:58:08.102325] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:35:05.041 [2024-10-11 22:58:08.102420] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid392172 ] 00:35:05.041 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:05.041 Zero copy mechanism will not be used. 
00:35:05.041 [2024-10-11 22:58:08.161570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:05.041 [2024-10-11 22:58:08.205434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:05.299 22:58:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:05.299 22:58:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:35:05.299 22:58:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:05.299 22:58:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:05.299 22:58:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:05.558 22:58:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:05.558 22:58:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:05.816 nvme0n1 00:35:06.075 22:58:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:06.075 22:58:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:06.075 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:06.075 Zero copy mechanism will not be used. 00:35:06.075 Running I/O for 2 seconds... 
00:35:08.384 5520.00 IOPS, 690.00 MiB/s [2024-10-11T20:58:11.652Z] 6228.50 IOPS, 778.56 MiB/s 00:35:08.384 Latency(us) 00:35:08.384 [2024-10-11T20:58:11.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:08.384 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:08.384 nvme0n1 : 2.00 6226.20 778.27 0.00 0.00 2563.41 1881.13 9369.22 00:35:08.384 [2024-10-11T20:58:11.652Z] =================================================================================================================== 00:35:08.384 [2024-10-11T20:58:11.652Z] Total : 6226.20 778.27 0.00 0.00 2563.41 1881.13 9369.22 00:35:08.384 { 00:35:08.384 "results": [ 00:35:08.384 { 00:35:08.384 "job": "nvme0n1", 00:35:08.384 "core_mask": "0x2", 00:35:08.384 "workload": "randwrite", 00:35:08.384 "status": "finished", 00:35:08.384 "queue_depth": 16, 00:35:08.384 "io_size": 131072, 00:35:08.384 "runtime": 2.003148, 00:35:08.384 "iops": 6226.199961260975, 00:35:08.384 "mibps": 778.2749951576219, 00:35:08.384 "io_failed": 0, 00:35:08.384 "io_timeout": 0, 00:35:08.384 "avg_latency_us": 2563.4053053951966, 00:35:08.384 "min_latency_us": 1881.125925925926, 00:35:08.384 "max_latency_us": 9369.22074074074 00:35:08.384 } 00:35:08.384 ], 00:35:08.384 "core_count": 1 00:35:08.384 } 00:35:08.384 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:08.384 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:08.384 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:08.384 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:08.384 | select(.opcode=="crc32c") 00:35:08.384 | "\(.module_name) \(.executed)"' 00:35:08.384 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:08.384 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:08.384 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:08.384 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:08.384 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:08.384 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 392172 00:35:08.384 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 392172 ']' 00:35:08.384 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 392172 00:35:08.384 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:08.384 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:08.384 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 392172 00:35:08.385 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:08.385 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:08.385 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 392172' 00:35:08.385 killing process with pid 392172 00:35:08.385 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 392172 00:35:08.385 Received shutdown signal, test time was about 2.000000 seconds 00:35:08.385 
00:35:08.385 Latency(us) 00:35:08.385 [2024-10-11T20:58:11.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:08.385 [2024-10-11T20:58:11.653Z] =================================================================================================================== 00:35:08.385 [2024-10-11T20:58:11.653Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:08.385 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 392172 00:35:08.643 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 390810 00:35:08.643 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 390810 ']' 00:35:08.643 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 390810 00:35:08.643 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:08.643 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:08.643 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 390810 00:35:08.643 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:08.643 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:08.643 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 390810' 00:35:08.643 killing process with pid 390810 00:35:08.643 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 390810 00:35:08.643 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 390810 00:35:08.901 00:35:08.901 real 0m15.695s 
00:35:08.901 user 0m31.511s 00:35:08.901 sys 0m4.316s 00:35:08.901 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:08.901 22:58:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:08.901 ************************************ 00:35:08.901 END TEST nvmf_digest_clean 00:35:08.901 ************************************ 00:35:08.901 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:08.901 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:08.901 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:08.901 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:08.901 ************************************ 00:35:08.901 START TEST nvmf_digest_error 00:35:08.901 ************************************ 00:35:08.901 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:35:08.901 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:08.902 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:08.902 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:08.902 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:08.902 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=392726 00:35:08.902 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:08.902 22:58:12 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 392726 00:35:08.902 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 392726 ']' 00:35:08.902 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:08.902 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:08.902 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:08.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:08.902 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:08.902 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:08.902 [2024-10-11 22:58:12.086303] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:35:08.902 [2024-10-11 22:58:12.086421] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:08.902 [2024-10-11 22:58:12.150669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:09.160 [2024-10-11 22:58:12.194313] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:09.160 [2024-10-11 22:58:12.194383] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:09.160 [2024-10-11 22:58:12.194405] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:09.160 [2024-10-11 22:58:12.194416] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:09.160 [2024-10-11 22:58:12.194425] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:09.160 [2024-10-11 22:58:12.195006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:09.160 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:09.160 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:09.161 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:09.161 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:09.161 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:09.161 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:09.161 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:09.161 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.161 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:09.161 [2024-10-11 22:58:12.335712] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:09.161 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.161 22:58:12 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:35:09.161 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:35:09.161 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:09.161 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:09.419 null0
00:35:09.419 [2024-10-11 22:58:12.451612] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:35:09.419 [2024-10-11 22:58:12.475826] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:09.419 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:09.419 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:35:09.419 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:35:09.419 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:35:09.419 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:35:09.419 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:35:09.419 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=392747
00:35:09.419 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:35:09.419 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 392747 /var/tmp/bperf.sock
00:35:09.419 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 392747 ']'
00:35:09.419 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:09.419 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:35:09.419 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:35:09.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:09.419 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:35:09.419 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:09.419 [2024-10-11 22:58:12.522294] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization...
00:35:09.419 [2024-10-11 22:58:12.522360] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid392747 ]
00:35:09.419 [2024-10-11 22:58:12.580529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:09.419 [2024-10-11 22:58:12.625309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:35:09.677 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:35:09.677 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:35:09.677 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:09.677 22:58:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:09.936 22:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:09.936 22:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:09.936 22:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:09.936 22:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:09.936 22:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:09.936 22:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:10.194 nvme0n1
00:35:10.194 22:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:35:10.194 22:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:10.194 22:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:10.194 22:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:10.194 22:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:35:10.194 22:58:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:10.452 Running I/O for 2 seconds... 00:35:10.452 [2024-10-11 22:58:13.553031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.452 [2024-10-11 22:58:13.553091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.452 [2024-10-11 22:58:13.553129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.452 [2024-10-11 22:58:13.568158] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.452 [2024-10-11 22:58:13.568192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.452 [2024-10-11 22:58:13.568219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.452 [2024-10-11 22:58:13.580159] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.452 [2024-10-11 22:58:13.580189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.453 [2024-10-11 22:58:13.580222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.453 [2024-10-11 22:58:13.594015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.453 [2024-10-11 22:58:13.594060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19908 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.453 [2024-10-11 22:58:13.594077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.453 [2024-10-11 22:58:13.606932] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.453 [2024-10-11 22:58:13.606962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.453 [2024-10-11 22:58:13.606993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.453 [2024-10-11 22:58:13.623456] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.453 [2024-10-11 22:58:13.623484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.453 [2024-10-11 22:58:13.623505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.453 [2024-10-11 22:58:13.638448] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.453 [2024-10-11 22:58:13.638481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.453 [2024-10-11 22:58:13.638498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.453 [2024-10-11 22:58:13.654204] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.453 [2024-10-11 22:58:13.654235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.453 [2024-10-11 22:58:13.654252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.453 [2024-10-11 22:58:13.665330] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.453 [2024-10-11 22:58:13.665359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.453 [2024-10-11 22:58:13.665376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.453 [2024-10-11 22:58:13.678514] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.453 [2024-10-11 22:58:13.678566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.453 [2024-10-11 22:58:13.678584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.453 [2024-10-11 22:58:13.693767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.453 [2024-10-11 22:58:13.693799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.453 [2024-10-11 22:58:13.693837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.453 [2024-10-11 22:58:13.707644] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 
00:35:10.453 [2024-10-11 22:58:13.707684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.453 [2024-10-11 22:58:13.707703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.453 [2024-10-11 22:58:13.718670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.453 [2024-10-11 22:58:13.718700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.453 [2024-10-11 22:58:13.718733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.711 [2024-10-11 22:58:13.732618] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.711 [2024-10-11 22:58:13.732647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.711 [2024-10-11 22:58:13.732667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.711 [2024-10-11 22:58:13.747461] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.711 [2024-10-11 22:58:13.747491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.711 [2024-10-11 22:58:13.747508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.711 [2024-10-11 22:58:13.759001] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.711 [2024-10-11 22:58:13.759028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.711 [2024-10-11 22:58:13.759048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.711 [2024-10-11 22:58:13.771737] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.711 [2024-10-11 22:58:13.771768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.711 [2024-10-11 22:58:13.771785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.711 [2024-10-11 22:58:13.787443] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.711 [2024-10-11 22:58:13.787471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.711 [2024-10-11 22:58:13.787489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.711 [2024-10-11 22:58:13.798434] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.711 [2024-10-11 22:58:13.798462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.711 [2024-10-11 22:58:13.798481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:10.711 [2024-10-11 22:58:13.812831] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.711 [2024-10-11 22:58:13.812880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.711 [2024-10-11 22:58:13.812896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.711 [2024-10-11 22:58:13.827454] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.711 [2024-10-11 22:58:13.827483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.712 [2024-10-11 22:58:13.827502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.712 [2024-10-11 22:58:13.842159] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.712 [2024-10-11 22:58:13.842204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.712 [2024-10-11 22:58:13.842220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.712 [2024-10-11 22:58:13.856159] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.712 [2024-10-11 22:58:13.856190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.712 [2024-10-11 22:58:13.856228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.712 [2024-10-11 22:58:13.867913] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.712 [2024-10-11 22:58:13.867944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.712 [2024-10-11 22:58:13.867976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.712 [2024-10-11 22:58:13.884809] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.712 [2024-10-11 22:58:13.884843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.712 [2024-10-11 22:58:13.884860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.712 [2024-10-11 22:58:13.899451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.712 [2024-10-11 22:58:13.899497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.712 [2024-10-11 22:58:13.899516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.712 [2024-10-11 22:58:13.912817] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.712 [2024-10-11 22:58:13.912876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.712 [2024-10-11 
22:58:13.912894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.712 [2024-10-11 22:58:13.928825] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.712 [2024-10-11 22:58:13.928867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.712 [2024-10-11 22:58:13.928898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.712 [2024-10-11 22:58:13.943202] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.712 [2024-10-11 22:58:13.943235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.712 [2024-10-11 22:58:13.943265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.712 [2024-10-11 22:58:13.955756] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.712 [2024-10-11 22:58:13.955788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.712 [2024-10-11 22:58:13.955805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.712 [2024-10-11 22:58:13.970276] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.712 [2024-10-11 22:58:13.970305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25570 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.712 [2024-10-11 22:58:13.970322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.970 [2024-10-11 22:58:13.985429] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.971 [2024-10-11 22:58:13.985459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.971 [2024-10-11 22:58:13.985475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.971 [2024-10-11 22:58:14.000923] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.971 [2024-10-11 22:58:14.000950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.971 [2024-10-11 22:58:14.000969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.971 [2024-10-11 22:58:14.015764] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.971 [2024-10-11 22:58:14.015797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:29 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.971 [2024-10-11 22:58:14.015815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.971 [2024-10-11 22:58:14.027319] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.971 [2024-10-11 22:58:14.027349] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.971 [2024-10-11 22:58:14.027376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.971 [2024-10-11 22:58:14.040947] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.971 [2024-10-11 22:58:14.040990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.971 [2024-10-11 22:58:14.041007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.971 [2024-10-11 22:58:14.055045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.971 [2024-10-11 22:58:14.055075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.971 [2024-10-11 22:58:14.055095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.971 [2024-10-11 22:58:14.066685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.971 [2024-10-11 22:58:14.066718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.971 [2024-10-11 22:58:14.066736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.971 [2024-10-11 22:58:14.083079] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x128a020) 00:35:10.971 [2024-10-11 22:58:14.083108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.971 [2024-10-11 22:58:14.083130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.971 [2024-10-11 22:58:14.098674] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.971 [2024-10-11 22:58:14.098706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.971 [2024-10-11 22:58:14.098725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.971 [2024-10-11 22:58:14.114678] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.971 [2024-10-11 22:58:14.114710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.971 [2024-10-11 22:58:14.114729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.971 [2024-10-11 22:58:14.126222] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.971 [2024-10-11 22:58:14.126250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.971 [2024-10-11 22:58:14.126270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.971 [2024-10-11 22:58:14.141098] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.971 [2024-10-11 22:58:14.141127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.971 [2024-10-11 22:58:14.141142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.971 [2024-10-11 22:58:14.155135] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.971 [2024-10-11 22:58:14.155180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.971 [2024-10-11 22:58:14.155199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.971 [2024-10-11 22:58:14.166406] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.971 [2024-10-11 22:58:14.166434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.971 [2024-10-11 22:58:14.166457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.971 [2024-10-11 22:58:14.182436] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.971 [2024-10-11 22:58:14.182464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.971 [2024-10-11 22:58:14.182491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:10.971 [2024-10-11 22:58:14.197435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.971 [2024-10-11 22:58:14.197465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.971 [2024-10-11 22:58:14.197495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.971 [2024-10-11 22:58:14.212201] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.971 [2024-10-11 22:58:14.212232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.971 [2024-10-11 22:58:14.212264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.971 [2024-10-11 22:58:14.227828] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:10.971 [2024-10-11 22:58:14.227859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.971 [2024-10-11 22:58:14.227891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.229 [2024-10-11 22:58:14.242093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:11.229 [2024-10-11 22:58:14.242140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.229 [2024-10-11 22:58:14.242159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.229 [2024-10-11 22:58:14.253455] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:11.229 [2024-10-11 22:58:14.253483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.229 [2024-10-11 22:58:14.253502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.229 [2024-10-11 22:58:14.267017] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:11.229 [2024-10-11 22:58:14.267045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:25131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.229 [2024-10-11 22:58:14.267061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.229 [2024-10-11 22:58:14.282022] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:11.229 [2024-10-11 22:58:14.282050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.229 [2024-10-11 22:58:14.282065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.229 [2024-10-11 22:58:14.296962] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:11.229 [2024-10-11 22:58:14.296993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.229 [2024-10-11 
22:58:14.297009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.229 [2024-10-11 22:58:14.307760] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.229 [2024-10-11 22:58:14.307796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.229 [2024-10-11 22:58:14.307813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.229 [2024-10-11 22:58:14.322994] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.229 [2024-10-11 22:58:14.323026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.229 [2024-10-11 22:58:14.323044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.229 [2024-10-11 22:58:14.337749] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.229 [2024-10-11 22:58:14.337780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.229 [2024-10-11 22:58:14.337814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.229 [2024-10-11 22:58:14.353425] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.229 [2024-10-11 22:58:14.353456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.229 [2024-10-11 22:58:14.353473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.229 [2024-10-11 22:58:14.365215] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.229 [2024-10-11 22:58:14.365246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.229 [2024-10-11 22:58:14.365264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.229 [2024-10-11 22:58:14.379388] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.229 [2024-10-11 22:58:14.379417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.229 [2024-10-11 22:58:14.379433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.229 [2024-10-11 22:58:14.390500] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.229 [2024-10-11 22:58:14.390528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.229 [2024-10-11 22:58:14.390568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.229 [2024-10-11 22:58:14.405566] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.229 [2024-10-11 22:58:14.405596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.229 [2024-10-11 22:58:14.405613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.230 [2024-10-11 22:58:14.420018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.230 [2024-10-11 22:58:14.420048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.230 [2024-10-11 22:58:14.420063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.230 [2024-10-11 22:58:14.435805] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.230 [2024-10-11 22:58:14.435834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.230 [2024-10-11 22:58:14.435865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.230 [2024-10-11 22:58:14.451962] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.230 [2024-10-11 22:58:14.451994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.230 [2024-10-11 22:58:14.452012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.230 [2024-10-11 22:58:14.466733] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.230 [2024-10-11 22:58:14.466762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.230 [2024-10-11 22:58:14.466778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.230 [2024-10-11 22:58:14.482753] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.230 [2024-10-11 22:58:14.482782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.230 [2024-10-11 22:58:14.482798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.488 [2024-10-11 22:58:14.498350] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.488 [2024-10-11 22:58:14.498382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.488 [2024-10-11 22:58:14.498400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.488 [2024-10-11 22:58:14.509638] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.488 [2024-10-11 22:58:14.509668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.488 [2024-10-11 22:58:14.509684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.488 [2024-10-11 22:58:14.522825] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.488 [2024-10-11 22:58:14.522857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.488 [2024-10-11 22:58:14.522875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.488 [2024-10-11 22:58:14.535095] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.488 [2024-10-11 22:58:14.535124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.488 [2024-10-11 22:58:14.535140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.488 18106.00 IOPS, 70.73 MiB/s [2024-10-11T20:58:14.756Z] [2024-10-11 22:58:14.549258] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.488 [2024-10-11 22:58:14.549289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.488 [2024-10-11 22:58:14.549311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.488 [2024-10-11 22:58:14.562523] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.488 [2024-10-11 22:58:14.562579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.488 [2024-10-11 22:58:14.562612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.488 [2024-10-11 22:58:14.577201] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.488 [2024-10-11 22:58:14.577247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.488 [2024-10-11 22:58:14.577264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.488 [2024-10-11 22:58:14.590425] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.488 [2024-10-11 22:58:14.590453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.488 [2024-10-11 22:58:14.590469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.488 [2024-10-11 22:58:14.603168] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.488 [2024-10-11 22:58:14.603199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.488 [2024-10-11 22:58:14.603215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.488 [2024-10-11 22:58:14.615544] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.488 [2024-10-11 22:58:14.615599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.488 [2024-10-11 22:58:14.615616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.488 [2024-10-11 22:58:14.629182] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.488 [2024-10-11 22:58:14.629227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.488 [2024-10-11 22:58:14.629243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.488 [2024-10-11 22:58:14.642958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.488 [2024-10-11 22:58:14.642986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.488 [2024-10-11 22:58:14.643001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.488 [2024-10-11 22:58:14.656591] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.488 [2024-10-11 22:58:14.656623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.488 [2024-10-11 22:58:14.656641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.488 [2024-10-11 22:58:14.668752] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.488 [2024-10-11 22:58:14.668782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.488 [2024-10-11 22:58:14.668799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.488 [2024-10-11 22:58:14.681432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.488 [2024-10-11 22:58:14.681460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.488 [2024-10-11 22:58:14.681475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.488 [2024-10-11 22:58:14.696959] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.488 [2024-10-11 22:58:14.696989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.488 [2024-10-11 22:58:14.697005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.488 [2024-10-11 22:58:14.711072] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.488 [2024-10-11 22:58:14.711104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.488 [2024-10-11 22:58:14.711121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.488 [2024-10-11 22:58:14.726950] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.488 [2024-10-11 22:58:14.726978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.488 [2024-10-11 22:58:14.726994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.488 [2024-10-11 22:58:14.737318] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.488 [2024-10-11 22:58:14.737347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.488 [2024-10-11 22:58:14.737362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.488 [2024-10-11 22:58:14.754545] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.488 [2024-10-11 22:58:14.754586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.488 [2024-10-11 22:58:14.754604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.747 [2024-10-11 22:58:14.765532] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.747 [2024-10-11 22:58:14.765581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.747 [2024-10-11 22:58:14.765599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.747 [2024-10-11 22:58:14.779849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.747 [2024-10-11 22:58:14.779893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.747 [2024-10-11 22:58:14.779914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.747 [2024-10-11 22:58:14.792617] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.747 [2024-10-11 22:58:14.792645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.747 [2024-10-11 22:58:14.792661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.747 [2024-10-11 22:58:14.806034] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.747 [2024-10-11 22:58:14.806063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.747 [2024-10-11 22:58:14.806078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.747 [2024-10-11 22:58:14.818922] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.747 [2024-10-11 22:58:14.818953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.747 [2024-10-11 22:58:14.818970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.747 [2024-10-11 22:58:14.832680] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.747 [2024-10-11 22:58:14.832712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.747 [2024-10-11 22:58:14.832729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.747 [2024-10-11 22:58:14.844100] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.747 [2024-10-11 22:58:14.844127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.747 [2024-10-11 22:58:14.844142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.747 [2024-10-11 22:58:14.861227] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.747 [2024-10-11 22:58:14.861255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.747 [2024-10-11 22:58:14.861271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.747 [2024-10-11 22:58:14.875495] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.747 [2024-10-11 22:58:14.875526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.747 [2024-10-11 22:58:14.875570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.747 [2024-10-11 22:58:14.887473] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.747 [2024-10-11 22:58:14.887501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.747 [2024-10-11 22:58:14.887516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.747 [2024-10-11 22:58:14.900453] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.747 [2024-10-11 22:58:14.900486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.747 [2024-10-11 22:58:14.900502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.747 [2024-10-11 22:58:14.915592] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.747 [2024-10-11 22:58:14.915625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.747 [2024-10-11 22:58:14.915642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.747 [2024-10-11 22:58:14.930805] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.747 [2024-10-11 22:58:14.930852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.747 [2024-10-11 22:58:14.930869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.747 [2024-10-11 22:58:14.946195] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.747 [2024-10-11 22:58:14.946227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.747 [2024-10-11 22:58:14.946245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.747 [2024-10-11 22:58:14.961631] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.747 [2024-10-11 22:58:14.961663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.747 [2024-10-11 22:58:14.961681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.747 [2024-10-11 22:58:14.973331] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.747 [2024-10-11 22:58:14.973362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.747 [2024-10-11 22:58:14.973378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.747 [2024-10-11 22:58:14.989452] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.747 [2024-10-11 22:58:14.989481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.747 [2024-10-11 22:58:14.989497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.747 [2024-10-11 22:58:15.004340] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:11.747 [2024-10-11 22:58:15.004372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.747 [2024-10-11 22:58:15.004388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.006 [2024-10-11 22:58:15.019802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.006 [2024-10-11 22:58:15.019835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.006 [2024-10-11 22:58:15.019852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.006 [2024-10-11 22:58:15.036025] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.006 [2024-10-11 22:58:15.036057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.006 [2024-10-11 22:58:15.036074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.006 [2024-10-11 22:58:15.047388] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.006 [2024-10-11 22:58:15.047418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.006 [2024-10-11 22:58:15.047435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.006 [2024-10-11 22:58:15.061081] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.006 [2024-10-11 22:58:15.061124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.006 [2024-10-11 22:58:15.061140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.006 [2024-10-11 22:58:15.073422] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.006 [2024-10-11 22:58:15.073453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.006 [2024-10-11 22:58:15.073468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.006 [2024-10-11 22:58:15.085898] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.006 [2024-10-11 22:58:15.085927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.006 [2024-10-11 22:58:15.085942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.006 [2024-10-11 22:58:15.098602] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.006 [2024-10-11 22:58:15.098632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.006 [2024-10-11 22:58:15.098663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.006 [2024-10-11 22:58:15.113369] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.006 [2024-10-11 22:58:15.113399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.006 [2024-10-11 22:58:15.113416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.006 [2024-10-11 22:58:15.126226] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.006 [2024-10-11 22:58:15.126257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.006 [2024-10-11 22:58:15.126273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.006 [2024-10-11 22:58:15.140609] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.006 [2024-10-11 22:58:15.140641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.006 [2024-10-11 22:58:15.140664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.006 [2024-10-11 22:58:15.152542] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.006 [2024-10-11 22:58:15.152579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.006 [2024-10-11 22:58:15.152596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.006 [2024-10-11 22:58:15.166716] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.006 [2024-10-11 22:58:15.166747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.006 [2024-10-11 22:58:15.166764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.006 [2024-10-11 22:58:15.181458] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.006 [2024-10-11 22:58:15.181503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.006 [2024-10-11 22:58:15.181521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.006 [2024-10-11 22:58:15.197048] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.006 [2024-10-11 22:58:15.197078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.006 [2024-10-11 22:58:15.197095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.006 [2024-10-11 22:58:15.210654] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.006 [2024-10-11 22:58:15.210691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.006 [2024-10-11 22:58:15.210709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.006 [2024-10-11 22:58:15.222106] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.006 [2024-10-11 22:58:15.222145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.006 [2024-10-11 22:58:15.222161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.006 [2024-10-11 22:58:15.237403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.006 [2024-10-11 22:58:15.237448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.006 [2024-10-11 22:58:15.237463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.006 [2024-10-11 22:58:15.251155] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.006 [2024-10-11 22:58:15.251187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.006 [2024-10-11 22:58:15.251204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.006 [2024-10-11 22:58:15.267287] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.006 [2024-10-11 22:58:15.267325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.007 [2024-10-11 22:58:15.267343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.265 [2024-10-11 22:58:15.278266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.265 [2024-10-11 22:58:15.278295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.265 [2024-10-11 22:58:15.278311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.265 [2024-10-11 22:58:15.293160] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.265 [2024-10-11 22:58:15.293190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.265 [2024-10-11 22:58:15.293206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.265 [2024-10-11 22:58:15.309814] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.265 [2024-10-11 22:58:15.309863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:51 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.265 [2024-10-11 22:58:15.309879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.265 [2024-10-11 22:58:15.324955] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.265 [2024-10-11 22:58:15.324986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.265 [2024-10-11 22:58:15.325009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.265 [2024-10-11 22:58:15.341136] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.265 [2024-10-11 22:58:15.341168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.265 [2024-10-11 22:58:15.341186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.266 [2024-10-11 22:58:15.356926] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.266 [2024-10-11 22:58:15.356955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.266 [2024-10-11 22:58:15.356970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.266 [2024-10-11 22:58:15.372559] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.266 [2024-10-11 22:58:15.372590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.266 [2024-10-11 22:58:15.372607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.266 [2024-10-11 22:58:15.386488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.266 [2024-10-11 22:58:15.386519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.266 [2024-10-11 22:58:15.386536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.266 [2024-10-11 22:58:15.398253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.266 [2024-10-11 22:58:15.398282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.266 [2024-10-11 22:58:15.398298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.266 [2024-10-11 22:58:15.413705] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.266 [2024-10-11 22:58:15.413735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.266 [2024-10-11 22:58:15.413750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.266 [2024-10-11 22:58:15.426781] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020)
00:35:12.266 [2024-10-11 22:58:15.426812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.266 [2024-10-11 22:58:15.426843]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.266 [2024-10-11 22:58:15.439923] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:12.266 [2024-10-11 22:58:15.439954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.266 [2024-10-11 22:58:15.439971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.266 [2024-10-11 22:58:15.455234] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:12.266 [2024-10-11 22:58:15.455263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.266 [2024-10-11 22:58:15.455294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.266 [2024-10-11 22:58:15.467050] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:12.266 [2024-10-11 22:58:15.467081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.266 [2024-10-11 22:58:15.467113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.266 [2024-10-11 22:58:15.481409] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:12.266 [2024-10-11 22:58:15.481439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2930 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:12.266 [2024-10-11 22:58:15.481454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.266 [2024-10-11 22:58:15.496648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:12.266 [2024-10-11 22:58:15.496679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.266 [2024-10-11 22:58:15.496696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.266 [2024-10-11 22:58:15.511758] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:12.266 [2024-10-11 22:58:15.511800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.266 [2024-10-11 22:58:15.511819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.266 [2024-10-11 22:58:15.523193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:12.266 [2024-10-11 22:58:15.523222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.266 [2024-10-11 22:58:15.523237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.524 [2024-10-11 22:58:15.537583] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x128a020) 00:35:12.524 [2024-10-11 22:58:15.537615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:122 nsid:1 lba:23451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.524 [2024-10-11 22:58:15.537632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.524 18239.00 IOPS, 71.25 MiB/s 00:35:12.524 Latency(us) 00:35:12.524 [2024-10-11T20:58:15.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:12.524 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:12.524 nvme0n1 : 2.05 17880.20 69.84 0.00 0.00 7012.53 3495.25 55535.69 00:35:12.524 [2024-10-11T20:58:15.792Z] =================================================================================================================== 00:35:12.524 [2024-10-11T20:58:15.792Z] Total : 17880.20 69.84 0.00 0.00 7012.53 3495.25 55535.69 00:35:12.524 { 00:35:12.524 "results": [ 00:35:12.524 { 00:35:12.524 "job": "nvme0n1", 00:35:12.524 "core_mask": "0x2", 00:35:12.524 "workload": "randread", 00:35:12.524 "status": "finished", 00:35:12.524 "queue_depth": 128, 00:35:12.524 "io_size": 4096, 00:35:12.524 "runtime": 2.047293, 00:35:12.524 "iops": 17880.19594654991, 00:35:12.524 "mibps": 69.84451541621058, 00:35:12.524 "io_failed": 0, 00:35:12.524 "io_timeout": 0, 00:35:12.524 "avg_latency_us": 7012.530433990784, 00:35:12.524 "min_latency_us": 3495.2533333333336, 00:35:12.524 "max_latency_us": 55535.69185185185 00:35:12.524 } 00:35:12.524 ], 00:35:12.524 "core_count": 1 00:35:12.524 } 00:35:12.524 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:12.524 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:12.524 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 
00:35:12.524 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:12.524 | .driver_specific 00:35:12.524 | .nvme_error 00:35:12.524 | .status_code 00:35:12.524 | .command_transient_transport_error' 00:35:12.782 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 )) 00:35:12.782 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 392747 00:35:12.783 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 392747 ']' 00:35:12.783 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 392747 00:35:12.783 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:12.783 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:12.783 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 392747 00:35:12.783 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:12.783 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:12.783 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 392747' 00:35:12.783 killing process with pid 392747 00:35:12.783 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 392747 00:35:12.783 Received shutdown signal, test time was about 2.000000 seconds 00:35:12.783 00:35:12.783 Latency(us) 00:35:12.783 [2024-10-11T20:58:16.051Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:12.783 [2024-10-11T20:58:16.051Z] 
=================================================================================================================== 00:35:12.783 [2024-10-11T20:58:16.051Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:12.783 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 392747 00:35:13.040 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:35:13.040 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:13.040 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:13.040 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:13.040 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:13.040 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=393157 00:35:13.040 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:35:13.040 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 393157 /var/tmp/bperf.sock 00:35:13.040 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 393157 ']' 00:35:13.040 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:13.040 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:13.040 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:35:13.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:13.040 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:13.040 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:13.040 [2024-10-11 22:58:16.165088] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:35:13.040 [2024-10-11 22:58:16.165184] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid393157 ] 00:35:13.040 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:13.040 Zero copy mechanism will not be used. 00:35:13.040 [2024-10-11 22:58:16.229429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:13.040 [2024-10-11 22:58:16.280491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:13.299 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:13.299 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:13.299 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:13.299 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:13.557 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:13.557 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.557 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:13.557 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.557 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:13.557 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:14.123 nvme0n1 00:35:14.123 22:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:14.123 22:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.123 22:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:14.123 22:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.123 22:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:14.123 22:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:14.123 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:14.123 Zero copy mechanism will not be used. 00:35:14.123 Running I/O for 2 seconds... 
00:35:14.123 [2024-10-11 22:58:17.314652] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.123 [2024-10-11 22:58:17.314703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.123 [2024-10-11 22:58:17.314725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.123 [2024-10-11 22:58:17.320851] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.123 [2024-10-11 22:58:17.320895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.123 [2024-10-11 22:58:17.320919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.123 [2024-10-11 22:58:17.328078] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.123 [2024-10-11 22:58:17.328112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.123 [2024-10-11 22:58:17.328130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.123 [2024-10-11 22:58:17.335137] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.123 [2024-10-11 22:58:17.335169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.123 [2024-10-11 22:58:17.335187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.123 [2024-10-11 22:58:17.342596] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.123 [2024-10-11 22:58:17.342631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.123 [2024-10-11 22:58:17.342649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.123 [2024-10-11 22:58:17.349952] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.123 [2024-10-11 22:58:17.349999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.123 [2024-10-11 22:58:17.350016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.123 [2024-10-11 22:58:17.356361] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.123 [2024-10-11 22:58:17.356394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.123 [2024-10-11 22:58:17.356411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.123 [2024-10-11 22:58:17.362773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.123 [2024-10-11 22:58:17.362808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.123 [2024-10-11 22:58:17.362828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.123 [2024-10-11 22:58:17.369382] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.123 [2024-10-11 22:58:17.369417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.123 [2024-10-11 22:58:17.369436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.123 [2024-10-11 22:58:17.375178] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.123 [2024-10-11 22:58:17.375212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.124 [2024-10-11 22:58:17.375231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.124 [2024-10-11 22:58:17.380524] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.124 [2024-10-11 22:58:17.380568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.124 [2024-10-11 22:58:17.380589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.124 [2024-10-11 22:58:17.386098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.124 [2024-10-11 22:58:17.386133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:14.124 [2024-10-11 22:58:17.386152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.124 [2024-10-11 22:58:17.391339] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.124 [2024-10-11 22:58:17.391388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.124 [2024-10-11 22:58:17.391407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.383 [2024-10-11 22:58:17.396354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.383 [2024-10-11 22:58:17.396388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-10-11 22:58:17.396415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.383 [2024-10-11 22:58:17.401500] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.383 [2024-10-11 22:58:17.401537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-10-11 22:58:17.401568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.383 [2024-10-11 22:58:17.406701] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.383 [2024-10-11 22:58:17.406735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 
nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-10-11 22:58:17.406770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.383 [2024-10-11 22:58:17.411798] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.383 [2024-10-11 22:58:17.411832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-10-11 22:58:17.411851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.383 [2024-10-11 22:58:17.416857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.383 [2024-10-11 22:58:17.416905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-10-11 22:58:17.416924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.383 [2024-10-11 22:58:17.422360] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.383 [2024-10-11 22:58:17.422393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-10-11 22:58:17.422426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.383 [2024-10-11 22:58:17.428067] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.383 [2024-10-11 22:58:17.428100] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-10-11 22:58:17.428117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.383 [2024-10-11 22:58:17.433224] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.383 [2024-10-11 22:58:17.433256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-10-11 22:58:17.433275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.383 [2024-10-11 22:58:17.438141] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.383 [2024-10-11 22:58:17.438174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-10-11 22:58:17.438192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.383 [2024-10-11 22:58:17.443734] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.383 [2024-10-11 22:58:17.443773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-10-11 22:58:17.443792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.383 [2024-10-11 22:58:17.448937] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2364500) 00:35:14.383 [2024-10-11 22:58:17.448971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-10-11 22:58:17.448989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.383 [2024-10-11 22:58:17.454517] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.383 [2024-10-11 22:58:17.454573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-10-11 22:58:17.454595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.383 [2024-10-11 22:58:17.460921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.383 [2024-10-11 22:58:17.460954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-10-11 22:58:17.460972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.383 [2024-10-11 22:58:17.467379] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.383 [2024-10-11 22:58:17.467412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-10-11 22:58:17.467444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.383 [2024-10-11 22:58:17.474880] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.383 [2024-10-11 22:58:17.474932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.383 [2024-10-11 22:58:17.474952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:14.383 [2024-10-11 22:58:17.483031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.383 [2024-10-11 22:58:17.483063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.383 [2024-10-11 22:58:17.483086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:14.383 [2024-10-11 22:58:17.491358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.383 [2024-10-11 22:58:17.491406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.383 [2024-10-11 22:58:17.491424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:14.383 [2024-10-11 22:58:17.499782] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.383 [2024-10-11 22:58:17.499815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.383 [2024-10-11 22:58:17.499866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:14.383 [2024-10-11 22:58:17.508160] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.383 [2024-10-11 22:58:17.508192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.383 [2024-10-11 22:58:17.508210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:14.383 [2024-10-11 22:58:17.516381] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.383 [2024-10-11 22:58:17.516413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.383 [2024-10-11 22:58:17.516431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:14.383 [2024-10-11 22:58:17.524485] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.383 [2024-10-11 22:58:17.524517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.383 [2024-10-11 22:58:17.524542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:14.383 [2024-10-11 22:58:17.532739] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.383 [2024-10-11 22:58:17.532771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.383 [2024-10-11 22:58:17.532788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:14.383 [2024-10-11 22:58:17.541184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.383 [2024-10-11 22:58:17.541229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.383 [2024-10-11 22:58:17.541246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:14.383 [2024-10-11 22:58:17.549720] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.383 [2024-10-11 22:58:17.549752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.383 [2024-10-11 22:58:17.549770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:14.383 [2024-10-11 22:58:17.557758] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.383 [2024-10-11 22:58:17.557790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.383 [2024-10-11 22:58:17.557807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:14.384 [2024-10-11 22:58:17.565924] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.384 [2024-10-11 22:58:17.565956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.384 [2024-10-11 22:58:17.565973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:14.384 [2024-10-11 22:58:17.573540] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.384 [2024-10-11 22:58:17.573587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.384 [2024-10-11 22:58:17.573607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:14.384 [2024-10-11 22:58:17.581005] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.384 [2024-10-11 22:58:17.581039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.384 [2024-10-11 22:58:17.581057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:14.384 [2024-10-11 22:58:17.588768] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.384 [2024-10-11 22:58:17.588801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.384 [2024-10-11 22:58:17.588819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:14.384 [2024-10-11 22:58:17.595639] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.384 [2024-10-11 22:58:17.595672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.384 [2024-10-11 22:58:17.595691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:14.384 [2024-10-11 22:58:17.603400] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.384 [2024-10-11 22:58:17.603432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.384 [2024-10-11 22:58:17.603450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:14.384 [2024-10-11 22:58:17.611051] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.384 [2024-10-11 22:58:17.611083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.384 [2024-10-11 22:58:17.611101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:14.384 [2024-10-11 22:58:17.617499] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.384 [2024-10-11 22:58:17.617546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.384 [2024-10-11 22:58:17.617577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:14.384 [2024-10-11 22:58:17.624164] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.384 [2024-10-11 22:58:17.624211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.384 [2024-10-11 22:58:17.624228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:14.384 [2024-10-11 22:58:17.630335] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.384 [2024-10-11 22:58:17.630366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.384 [2024-10-11 22:58:17.630384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:14.384 [2024-10-11 22:58:17.636084] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.384 [2024-10-11 22:58:17.636116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.384 [2024-10-11 22:58:17.636133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:14.384 [2024-10-11 22:58:17.641725] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.384 [2024-10-11 22:58:17.641757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.384 [2024-10-11 22:58:17.641775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:14.384 [2024-10-11 22:58:17.646931] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.384 [2024-10-11 22:58:17.646963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.384 [2024-10-11 22:58:17.646996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:14.643 [2024-10-11 22:58:17.652154] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.643 [2024-10-11 22:58:17.652186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.643 [2024-10-11 22:58:17.652203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:14.643 [2024-10-11 22:58:17.657416] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.643 [2024-10-11 22:58:17.657448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.643 [2024-10-11 22:58:17.657467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:14.643 [2024-10-11 22:58:17.663759] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.643 [2024-10-11 22:58:17.663791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.643 [2024-10-11 22:58:17.663809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:14.643 [2024-10-11 22:58:17.670633] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.643 [2024-10-11 22:58:17.670664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.643 [2024-10-11 22:58:17.670681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:14.643 [2024-10-11 22:58:17.676993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.643 [2024-10-11 22:58:17.677025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.643 [2024-10-11 22:58:17.677042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:14.643 [2024-10-11 22:58:17.682465] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.643 [2024-10-11 22:58:17.682496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.643 [2024-10-11 22:58:17.682518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:14.643 [2024-10-11 22:58:17.688090] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.643 [2024-10-11 22:58:17.688120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.643 [2024-10-11 22:58:17.688154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:14.643 [2024-10-11 22:58:17.691729] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.643 [2024-10-11 22:58:17.691768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.643 [2024-10-11 22:58:17.691794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:14.643 [2024-10-11 22:58:17.696471] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.643 [2024-10-11 22:58:17.696502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.643 [2024-10-11 22:58:17.696519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:14.643 [2024-10-11 22:58:17.701809] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.643 [2024-10-11 22:58:17.701841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.643 [2024-10-11 22:58:17.701859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:14.643 [2024-10-11 22:58:17.708335] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.643 [2024-10-11 22:58:17.708381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.643 [2024-10-11 22:58:17.708399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:14.643 [2024-10-11 22:58:17.714276] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.643 [2024-10-11 22:58:17.714309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.643 [2024-10-11 22:58:17.714326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:14.643 [2024-10-11 22:58:17.719903] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.643 [2024-10-11 22:58:17.719951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.643 [2024-10-11 22:58:17.719968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:14.643 [2024-10-11 22:58:17.725794] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.643 [2024-10-11 22:58:17.725827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.644 [2024-10-11 22:58:17.725845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:14.644 [2024-10-11 22:58:17.732255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.644 [2024-10-11 22:58:17.732300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.644 [2024-10-11 22:58:17.732320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:14.644 [2024-10-11 22:58:17.739859] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.644 [2024-10-11 22:58:17.739891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.644 [2024-10-11 22:58:17.739909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:14.644 [2024-10-11 22:58:17.747437] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.644 [2024-10-11 22:58:17.747485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.644 [2024-10-11 22:58:17.747504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:14.644 [2024-10-11 22:58:17.755253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.644 [2024-10-11 22:58:17.755284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.644 [2024-10-11 22:58:17.755302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:14.644 [2024-10-11 22:58:17.763338] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.644 [2024-10-11 22:58:17.763371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.644 [2024-10-11 22:58:17.763389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:14.644 [2024-10-11 22:58:17.771248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.644 [2024-10-11 22:58:17.771280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.644 [2024-10-11 22:58:17.771299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:14.644 [2024-10-11 22:58:17.779162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.644 [2024-10-11 22:58:17.779193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.644 [2024-10-11 22:58:17.779211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:14.644 [2024-10-11 22:58:17.787442] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.644 [2024-10-11 22:58:17.787473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.644 [2024-10-11 22:58:17.787491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:14.644 [2024-10-11 22:58:17.795067] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.644 [2024-10-11 22:58:17.795100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.644 [2024-10-11 22:58:17.795118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:14.644 [2024-10-11 22:58:17.802760] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.644 [2024-10-11 22:58:17.802793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.644 [2024-10-11 22:58:17.802810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:14.644 [2024-10-11 22:58:17.809560] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.644 [2024-10-11 22:58:17.809593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.644 [2024-10-11 22:58:17.809611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:14.644 [2024-10-11 22:58:17.817611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.644 [2024-10-11 22:58:17.817659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.644 [2024-10-11 22:58:17.817677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:14.644 [2024-10-11 22:58:17.825267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.644 [2024-10-11 22:58:17.825299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.644 [2024-10-11 22:58:17.825316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:14.644 [2024-10-11 22:58:17.832993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.644 [2024-10-11 22:58:17.833025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.644 [2024-10-11 22:58:17.833043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:14.644 [2024-10-11 22:58:17.840886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.644 [2024-10-11 22:58:17.840918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.644 [2024-10-11 22:58:17.840936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:14.644 [2024-10-11 22:58:17.848894] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.644 [2024-10-11 22:58:17.848924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.644 [2024-10-11 22:58:17.848942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:14.644 [2024-10-11 22:58:17.856447] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.644 [2024-10-11 22:58:17.856479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.644 [2024-10-11 22:58:17.856497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:14.644 [2024-10-11 22:58:17.864286] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.644 [2024-10-11 22:58:17.864317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.644 [2024-10-11 22:58:17.864354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:14.644 [2024-10-11 22:58:17.872547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.644 [2024-10-11 22:58:17.872603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.644 [2024-10-11 22:58:17.872621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:14.644 [2024-10-11 22:58:17.880648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.644 [2024-10-11 22:58:17.880680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.644 [2024-10-11 22:58:17.880714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:14.644 [2024-10-11 22:58:17.887023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.644 [2024-10-11 22:58:17.887054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.644 [2024-10-11 22:58:17.887072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:14.644 [2024-10-11 22:58:17.891906] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.644 [2024-10-11 22:58:17.891937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.644 [2024-10-11 22:58:17.891955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:14.644 [2024-10-11 22:58:17.896890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.644 [2024-10-11 22:58:17.896921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.644 [2024-10-11 22:58:17.896953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:14.644 [2024-10-11 22:58:17.901718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.644 [2024-10-11 22:58:17.901751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.644 [2024-10-11 22:58:17.901768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:14.644 [2024-10-11 22:58:17.906651] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.644 [2024-10-11 22:58:17.906683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.644 [2024-10-11 22:58:17.906701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:14.904 [2024-10-11 22:58:17.911715] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.904 [2024-10-11 22:58:17.911763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.904 [2024-10-11 22:58:17.911785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:14.904 [2024-10-11 22:58:17.917277] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.904 [2024-10-11 22:58:17.917310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.904 [2024-10-11 22:58:17.917327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:14.904 [2024-10-11 22:58:17.922522] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.904 [2024-10-11 22:58:17.922577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.904 [2024-10-11 22:58:17.922599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:14.904 [2024-10-11 22:58:17.927423] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.904 [2024-10-11 22:58:17.927454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.904 [2024-10-11 22:58:17.927472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:14.904 [2024-10-11 22:58:17.932621] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.904 [2024-10-11 22:58:17.932653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.904 [2024-10-11 22:58:17.932671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:14.904 [2024-10-11 22:58:17.938294] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.904 [2024-10-11 22:58:17.938326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.904 [2024-10-11 22:58:17.938344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:14.904 [2024-10-11 22:58:17.945349] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.904 [2024-10-11 22:58:17.945380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.904 [2024-10-11 22:58:17.945398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:14.904 [2024-10-11 22:58:17.951338] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.904 [2024-10-11 22:58:17.951369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.904 [2024-10-11 22:58:17.951387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:14.904 [2024-10-11 22:58:17.955337] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.904 [2024-10-11 22:58:17.955373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.904 [2024-10-11 22:58:17.955393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:14.904 [2024-10-11 22:58:17.959634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.904 [2024-10-11 22:58:17.959666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.904 [2024-10-11 22:58:17.959690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:14.904 [2024-10-11 22:58:17.964620] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.904 [2024-10-11 22:58:17.964649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.904 [2024-10-11 22:58:17.964666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:14.904 [2024-10-11 22:58:17.969516] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.904 [2024-10-11 22:58:17.969547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.904 [2024-10-11 22:58:17.969577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:14.904 [2024-10-11 22:58:17.974899] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.904 [2024-10-11 22:58:17.974931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.904 [2024-10-11 22:58:17.974949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:14.904 [2024-10-11 22:58:17.980833] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.904 [2024-10-11 22:58:17.980866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.904 [2024-10-11 22:58:17.980884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:14.904 [2024-10-11 22:58:17.987214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500)
00:35:14.904 [2024-10-11 22:58:17.987260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:35:14.904 [2024-10-11 22:58:17.987278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.904 [2024-10-11 22:58:17.993102] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.904 [2024-10-11 22:58:17.993133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.904 [2024-10-11 22:58:17.993151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.904 [2024-10-11 22:58:17.999429] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.904 [2024-10-11 22:58:17.999475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.905 [2024-10-11 22:58:17.999493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.905 [2024-10-11 22:58:18.006031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.905 [2024-10-11 22:58:18.006077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.905 [2024-10-11 22:58:18.006096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.905 [2024-10-11 22:58:18.012675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.905 [2024-10-11 22:58:18.012711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.905 [2024-10-11 22:58:18.012745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.905 [2024-10-11 22:58:18.017635] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.905 [2024-10-11 22:58:18.017667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.905 [2024-10-11 22:58:18.017686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.905 [2024-10-11 22:58:18.022879] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.905 [2024-10-11 22:58:18.022910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.905 [2024-10-11 22:58:18.022927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.905 [2024-10-11 22:58:18.028150] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.905 [2024-10-11 22:58:18.028182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.905 [2024-10-11 22:58:18.028199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.905 [2024-10-11 22:58:18.033060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.905 [2024-10-11 22:58:18.033093] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.905 [2024-10-11 22:58:18.033111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.905 [2024-10-11 22:58:18.037947] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.905 [2024-10-11 22:58:18.037979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.905 [2024-10-11 22:58:18.038012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.905 [2024-10-11 22:58:18.042783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.905 [2024-10-11 22:58:18.042815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.905 [2024-10-11 22:58:18.042833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.905 [2024-10-11 22:58:18.047748] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.905 [2024-10-11 22:58:18.047780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.905 [2024-10-11 22:58:18.047798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.905 [2024-10-11 22:58:18.052655] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2364500) 00:35:14.905 [2024-10-11 22:58:18.052687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.905 [2024-10-11 22:58:18.052705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.905 [2024-10-11 22:58:18.057584] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.905 [2024-10-11 22:58:18.057615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.905 [2024-10-11 22:58:18.057634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.905 [2024-10-11 22:58:18.062534] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.905 [2024-10-11 22:58:18.062574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.905 [2024-10-11 22:58:18.062593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.905 [2024-10-11 22:58:18.067479] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.905 [2024-10-11 22:58:18.067511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.905 [2024-10-11 22:58:18.067528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.905 [2024-10-11 22:58:18.072918] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.905 [2024-10-11 22:58:18.072948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.905 [2024-10-11 22:58:18.072965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.905 [2024-10-11 22:58:18.078284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.905 [2024-10-11 22:58:18.078316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.905 [2024-10-11 22:58:18.078334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.905 [2024-10-11 22:58:18.083358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.905 [2024-10-11 22:58:18.083389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.905 [2024-10-11 22:58:18.083407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.905 [2024-10-11 22:58:18.088308] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.905 [2024-10-11 22:58:18.088340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.905 [2024-10-11 22:58:18.088358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:14.905 [2024-10-11 22:58:18.093509] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.905 [2024-10-11 22:58:18.093541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.905 [2024-10-11 22:58:18.093571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.905 [2024-10-11 22:58:18.099042] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.905 [2024-10-11 22:58:18.099075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.905 [2024-10-11 22:58:18.099113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.905 [2024-10-11 22:58:18.104471] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.905 [2024-10-11 22:58:18.104504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.905 [2024-10-11 22:58:18.104522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.905 [2024-10-11 22:58:18.109755] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.905 [2024-10-11 22:58:18.109787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.905 [2024-10-11 22:58:18.109805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.905 [2024-10-11 22:58:18.115466] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.905 [2024-10-11 22:58:18.115514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.905 [2024-10-11 22:58:18.115533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.905 [2024-10-11 22:58:18.121703] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.905 [2024-10-11 22:58:18.121735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.905 [2024-10-11 22:58:18.121754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.905 [2024-10-11 22:58:18.127958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.905 [2024-10-11 22:58:18.127990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.905 [2024-10-11 22:58:18.128008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.905 [2024-10-11 22:58:18.134244] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.905 [2024-10-11 22:58:18.134276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.905 [2024-10-11 
22:58:18.134308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.905 [2024-10-11 22:58:18.140716] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.905 [2024-10-11 22:58:18.140748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.905 [2024-10-11 22:58:18.140780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.905 [2024-10-11 22:58:18.147244] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.905 [2024-10-11 22:58:18.147291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.905 [2024-10-11 22:58:18.147308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.905 [2024-10-11 22:58:18.153622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.905 [2024-10-11 22:58:18.153659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.905 [2024-10-11 22:58:18.153677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.905 [2024-10-11 22:58:18.159484] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.905 [2024-10-11 22:58:18.159516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.906 [2024-10-11 22:58:18.159534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.906 [2024-10-11 22:58:18.164753] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.906 [2024-10-11 22:58:18.164785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.906 [2024-10-11 22:58:18.164804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.906 [2024-10-11 22:58:18.169740] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:14.906 [2024-10-11 22:58:18.169773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.906 [2024-10-11 22:58:18.169791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.165 [2024-10-11 22:58:18.174736] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.165 [2024-10-11 22:58:18.174783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.165 [2024-10-11 22:58:18.174801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.165 [2024-10-11 22:58:18.180412] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.165 [2024-10-11 22:58:18.180444] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.165 [2024-10-11 22:58:18.180463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.165 [2024-10-11 22:58:18.185642] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.165 [2024-10-11 22:58:18.185675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.165 [2024-10-11 22:58:18.185694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.165 [2024-10-11 22:58:18.190665] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.165 [2024-10-11 22:58:18.190710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.165 [2024-10-11 22:58:18.190728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.165 [2024-10-11 22:58:18.195681] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.165 [2024-10-11 22:58:18.195728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.165 [2024-10-11 22:58:18.195745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.165 [2024-10-11 22:58:18.200744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.165 [2024-10-11 
22:58:18.200775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.165 [2024-10-11 22:58:18.200793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.165 [2024-10-11 22:58:18.205811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.165 [2024-10-11 22:58:18.205843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.165 [2024-10-11 22:58:18.205861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.165 [2024-10-11 22:58:18.210943] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.165 [2024-10-11 22:58:18.210975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.165 [2024-10-11 22:58:18.210993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.165 [2024-10-11 22:58:18.215970] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.165 [2024-10-11 22:58:18.216001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.165 [2024-10-11 22:58:18.216019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.165 [2024-10-11 22:58:18.221232] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x2364500) 00:35:15.165 [2024-10-11 22:58:18.221278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.165 [2024-10-11 22:58:18.221296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.165 [2024-10-11 22:58:18.227045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.165 [2024-10-11 22:58:18.227077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.165 [2024-10-11 22:58:18.227110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.165 [2024-10-11 22:58:18.232520] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.165 [2024-10-11 22:58:18.232561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.165 [2024-10-11 22:58:18.232582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.165 [2024-10-11 22:58:18.237935] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.165 [2024-10-11 22:58:18.237966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.165 [2024-10-11 22:58:18.237984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.165 [2024-10-11 22:58:18.242941] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.165 [2024-10-11 22:58:18.242973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.165 [2024-10-11 22:58:18.242996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.165 [2024-10-11 22:58:18.247967] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.165 [2024-10-11 22:58:18.247999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.165 [2024-10-11 22:58:18.248016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.165 [2024-10-11 22:58:18.252956] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.165 [2024-10-11 22:58:18.252988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.166 [2024-10-11 22:58:18.253006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.166 [2024-10-11 22:58:18.257945] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.166 [2024-10-11 22:58:18.257992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.166 [2024-10-11 22:58:18.258010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:35:15.166 [2024-10-11 22:58:18.262949] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.166 [2024-10-11 22:58:18.262981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.166 [2024-10-11 22:58:18.262999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.166 [2024-10-11 22:58:18.267850] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.166 [2024-10-11 22:58:18.267883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.166 [2024-10-11 22:58:18.267917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.166 [2024-10-11 22:58:18.272912] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.166 [2024-10-11 22:58:18.272944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.166 [2024-10-11 22:58:18.272963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.166 [2024-10-11 22:58:18.277946] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.166 [2024-10-11 22:58:18.277978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.166 [2024-10-11 22:58:18.277996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.166 [2024-10-11 22:58:18.282974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.166 [2024-10-11 22:58:18.283006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.166 [2024-10-11 22:58:18.283024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.166 [2024-10-11 22:58:18.287919] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.166 [2024-10-11 22:58:18.287949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.166 [2024-10-11 22:58:18.287967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.166 [2024-10-11 22:58:18.292918] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.166 [2024-10-11 22:58:18.292949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.166 [2024-10-11 22:58:18.292967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.166 [2024-10-11 22:58:18.297855] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.166 [2024-10-11 22:58:18.297901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.166 [2024-10-11 
22:58:18.297919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.166 [2024-10-11 22:58:18.302819] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.166 [2024-10-11 22:58:18.302867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.166 [2024-10-11 22:58:18.302885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.166 [2024-10-11 22:58:18.307783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.166 [2024-10-11 22:58:18.307814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.166 [2024-10-11 22:58:18.307832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.166 5128.00 IOPS, 641.00 MiB/s [2024-10-11T20:58:18.434Z] [2024-10-11 22:58:18.314256] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.166 [2024-10-11 22:58:18.314289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.166 [2024-10-11 22:58:18.314306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.166 [2024-10-11 22:58:18.319547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.166 [2024-10-11 22:58:18.319595] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.166 [2024-10-11 22:58:18.319615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.166 [2024-10-11 22:58:18.325391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.166 [2024-10-11 22:58:18.325423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.166 [2024-10-11 22:58:18.325441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.166 [2024-10-11 22:58:18.331579] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.166 [2024-10-11 22:58:18.331612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.166 [2024-10-11 22:58:18.331636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.166 [2024-10-11 22:58:18.338359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.166 [2024-10-11 22:58:18.338391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.166 [2024-10-11 22:58:18.338409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.166 [2024-10-11 22:58:18.344875] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.166 [2024-10-11 
22:58:18.344921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.166 [2024-10-11 22:58:18.344939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.166 [2024-10-11 22:58:18.349745] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.166 [2024-10-11 22:58:18.349776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.166 [2024-10-11 22:58:18.349794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.166 [2024-10-11 22:58:18.353224] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.166 [2024-10-11 22:58:18.353254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.166 [2024-10-11 22:58:18.353272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.166 [2024-10-11 22:58:18.357995] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.166 [2024-10-11 22:58:18.358026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.166 [2024-10-11 22:58:18.358042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.166 [2024-10-11 22:58:18.363193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x2364500) 00:35:15.166 [2024-10-11 22:58:18.363240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.166 [2024-10-11 22:58:18.363258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.166 [2024-10-11 22:58:18.368943] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.166 [2024-10-11 22:58:18.368987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.166 [2024-10-11 22:58:18.369004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.166 [2024-10-11 22:58:18.373962] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.166 [2024-10-11 22:58:18.374007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.166 [2024-10-11 22:58:18.374025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.166 [2024-10-11 22:58:18.379003] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.166 [2024-10-11 22:58:18.379042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.166 [2024-10-11 22:58:18.379061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.166 [2024-10-11 22:58:18.384083] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.166 [2024-10-11 22:58:18.384114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.166 [2024-10-11 22:58:18.384130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.166 [2024-10-11 22:58:18.388991] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.166 [2024-10-11 22:58:18.389021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.166 [2024-10-11 22:58:18.389039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.166 [2024-10-11 22:58:18.393960] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.166 [2024-10-11 22:58:18.393993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.166 [2024-10-11 22:58:18.394011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.166 [2024-10-11 22:58:18.399268] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.166 [2024-10-11 22:58:18.399300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.166 [2024-10-11 22:58:18.399318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:35:15.166 [2024-10-11 22:58:18.405417] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.166 [2024-10-11 22:58:18.405449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.167 [2024-10-11 22:58:18.405467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.167 [2024-10-11 22:58:18.411563] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.167 [2024-10-11 22:58:18.411595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.167 [2024-10-11 22:58:18.411614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.167 [2024-10-11 22:58:18.417116] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.167 [2024-10-11 22:58:18.417148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.167 [2024-10-11 22:58:18.417166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.167 [2024-10-11 22:58:18.423101] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.167 [2024-10-11 22:58:18.423140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.167 [2024-10-11 22:58:18.423157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.167 [2024-10-11 22:58:18.428272] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.167 [2024-10-11 22:58:18.428306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.167 [2024-10-11 22:58:18.428325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.426 [2024-10-11 22:58:18.433340] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.426 [2024-10-11 22:58:18.433373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.426 [2024-10-11 22:58:18.433391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.426 [2024-10-11 22:58:18.438537] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.426 [2024-10-11 22:58:18.438591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.426 [2024-10-11 22:58:18.438611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.426 [2024-10-11 22:58:18.443703] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.426 [2024-10-11 22:58:18.443735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.426 [2024-10-11 22:58:18.443753] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.426 [2024-10-11 22:58:18.449316] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.426 [2024-10-11 22:58:18.449348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.426 [2024-10-11 22:58:18.449366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.426 [2024-10-11 22:58:18.452738] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.426 [2024-10-11 22:58:18.452769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.427 [2024-10-11 22:58:18.452785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.427 [2024-10-11 22:58:18.458444] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.427 [2024-10-11 22:58:18.458475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.427 [2024-10-11 22:58:18.458492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.427 [2024-10-11 22:58:18.464119] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.427 [2024-10-11 22:58:18.464151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:15.427 [2024-10-11 22:58:18.464168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.427 [2024-10-11 22:58:18.469761] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.427 [2024-10-11 22:58:18.469793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.427 [2024-10-11 22:58:18.469816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.427 [2024-10-11 22:58:18.475471] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.427 [2024-10-11 22:58:18.475501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.427 [2024-10-11 22:58:18.475518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.427 [2024-10-11 22:58:18.480961] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.427 [2024-10-11 22:58:18.481007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.427 [2024-10-11 22:58:18.481025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.427 [2024-10-11 22:58:18.486805] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.427 [2024-10-11 22:58:18.486837] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.427 [2024-10-11 22:58:18.486862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.427 [2024-10-11 22:58:18.492536] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.427 [2024-10-11 22:58:18.492603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.427 [2024-10-11 22:58:18.492622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.427 [2024-10-11 22:58:18.498338] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.427 [2024-10-11 22:58:18.498370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.427 [2024-10-11 22:58:18.498402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.427 [2024-10-11 22:58:18.504050] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.427 [2024-10-11 22:58:18.504082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.427 [2024-10-11 22:58:18.504099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.427 [2024-10-11 22:58:18.509824] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.427 [2024-10-11 
22:58:18.509880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.427 [2024-10-11 22:58:18.509897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.427 [2024-10-11 22:58:18.515409] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.427 [2024-10-11 22:58:18.515455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.427 [2024-10-11 22:58:18.515472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.427 [2024-10-11 22:58:18.520527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.427 [2024-10-11 22:58:18.520582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.427 [2024-10-11 22:58:18.520602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.427 [2024-10-11 22:58:18.526731] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.427 [2024-10-11 22:58:18.526764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.427 [2024-10-11 22:58:18.526782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.427 [2024-10-11 22:58:18.534476] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x2364500) 00:35:15.427 [2024-10-11 22:58:18.534508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.427 [2024-10-11 22:58:18.534526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.427 [2024-10-11 22:58:18.541901] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.427 [2024-10-11 22:58:18.541947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.427 [2024-10-11 22:58:18.541964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.427 [2024-10-11 22:58:18.550048] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.427 [2024-10-11 22:58:18.550080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.427 [2024-10-11 22:58:18.550098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.427 [2024-10-11 22:58:18.557956] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.427 [2024-10-11 22:58:18.557988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.427 [2024-10-11 22:58:18.558006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.427 [2024-10-11 22:58:18.565957] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.427 [2024-10-11 22:58:18.566003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.427 [2024-10-11 22:58:18.566020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.427 [2024-10-11 22:58:18.571804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.427 [2024-10-11 22:58:18.571836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.427 [2024-10-11 22:58:18.571868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.427 [2024-10-11 22:58:18.576791] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.427 [2024-10-11 22:58:18.576823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.427 [2024-10-11 22:58:18.576847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.427 [2024-10-11 22:58:18.581835] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.427 [2024-10-11 22:58:18.581883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.427 [2024-10-11 22:58:18.581901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:35:15.427 [2024-10-11 22:58:18.586861] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.427 [2024-10-11 22:58:18.586893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.427 [2024-10-11 22:58:18.586911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.427 [2024-10-11 22:58:18.591838] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.427 [2024-10-11 22:58:18.591885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.427 [2024-10-11 22:58:18.591902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.427 [2024-10-11 22:58:18.596854] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.427 [2024-10-11 22:58:18.596885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.427 [2024-10-11 22:58:18.596901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.427 [2024-10-11 22:58:18.601939] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.427 [2024-10-11 22:58:18.601986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.427 [2024-10-11 22:58:18.602004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.427 [2024-10-11 22:58:18.606989] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.427 [2024-10-11 22:58:18.607022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.427 [2024-10-11 22:58:18.607041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.427 [2024-10-11 22:58:18.612776] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.427 [2024-10-11 22:58:18.612808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.427 [2024-10-11 22:58:18.612839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.427 [2024-10-11 22:58:18.618213] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.427 [2024-10-11 22:58:18.618263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.427 [2024-10-11 22:58:18.618281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.427 [2024-10-11 22:58:18.623309] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.428 [2024-10-11 22:58:18.623346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.428 [2024-10-11 22:58:18.623365] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.428 [2024-10-11 22:58:18.628388] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.428 [2024-10-11 22:58:18.628420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.428 [2024-10-11 22:58:18.628452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.428 [2024-10-11 22:58:18.634872] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.428 [2024-10-11 22:58:18.634921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.428 [2024-10-11 22:58:18.634940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.428 [2024-10-11 22:58:18.640646] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.428 [2024-10-11 22:58:18.640679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.428 [2024-10-11 22:58:18.640698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.428 [2024-10-11 22:58:18.647398] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.428 [2024-10-11 22:58:18.647431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:15.428 [2024-10-11 22:58:18.647448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.428 [2024-10-11 22:58:18.655147] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.428 [2024-10-11 22:58:18.655179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.428 [2024-10-11 22:58:18.655196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.428 [2024-10-11 22:58:18.661234] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.428 [2024-10-11 22:58:18.661266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.428 [2024-10-11 22:58:18.661299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.428 [2024-10-11 22:58:18.667344] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.428 [2024-10-11 22:58:18.667375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.428 [2024-10-11 22:58:18.667392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.428 [2024-10-11 22:58:18.672937] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.428 [2024-10-11 22:58:18.672970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.428 [2024-10-11 22:58:18.672988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.428 [2024-10-11 22:58:18.679284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.428 [2024-10-11 22:58:18.679314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.428 [2024-10-11 22:58:18.679331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.428 [2024-10-11 22:58:18.686922] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.428 [2024-10-11 22:58:18.686955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.428 [2024-10-11 22:58:18.686973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.428 [2024-10-11 22:58:18.693028] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.428 [2024-10-11 22:58:18.693060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.428 [2024-10-11 22:58:18.693092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.687 [2024-10-11 22:58:18.699214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.687 [2024-10-11 
22:58:18.699245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.687 [2024-10-11 22:58:18.699262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.687 [2024-10-11 22:58:18.704675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.687 [2024-10-11 22:58:18.704708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.687 [2024-10-11 22:58:18.704725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.687 [2024-10-11 22:58:18.710792] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.687 [2024-10-11 22:58:18.710823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.687 [2024-10-11 22:58:18.710840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.687 [2024-10-11 22:58:18.717126] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.687 [2024-10-11 22:58:18.717172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.687 [2024-10-11 22:58:18.717190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.687 [2024-10-11 22:58:18.723626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x2364500) 00:35:15.687 [2024-10-11 22:58:18.723658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.687 [2024-10-11 22:58:18.723690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.687 [2024-10-11 22:58:18.730120] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.687 [2024-10-11 22:58:18.730151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.687 [2024-10-11 22:58:18.730174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.687 [2024-10-11 22:58:18.735984] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.687 [2024-10-11 22:58:18.736016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.687 [2024-10-11 22:58:18.736034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.687 [2024-10-11 22:58:18.741014] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.687 [2024-10-11 22:58:18.741045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.687 [2024-10-11 22:58:18.741063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.687 [2024-10-11 22:58:18.747338] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.687 [2024-10-11 22:58:18.747384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.688 [2024-10-11 22:58:18.747401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.688 [2024-10-11 22:58:18.752541] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.688 [2024-10-11 22:58:18.752584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.688 [2024-10-11 22:58:18.752603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.688 [2024-10-11 22:58:18.757441] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.688 [2024-10-11 22:58:18.757473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.688 [2024-10-11 22:58:18.757490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.688 [2024-10-11 22:58:18.762348] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.688 [2024-10-11 22:58:18.762380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.688 [2024-10-11 22:58:18.762397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:15.688 [2024-10-11 22:58:18.767241] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.688 [2024-10-11 22:58:18.767271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.688 [2024-10-11 22:58:18.767288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.688 [2024-10-11 22:58:18.772817] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.688 [2024-10-11 22:58:18.772864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.688 [2024-10-11 22:58:18.772881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.688 [2024-10-11 22:58:18.779629] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.688 [2024-10-11 22:58:18.779681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.688 [2024-10-11 22:58:18.779699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.688 [2024-10-11 22:58:18.787515] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.688 [2024-10-11 22:58:18.787546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.688 [2024-10-11 22:58:18.787575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.688 [2024-10-11 22:58:18.794678] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.688 [2024-10-11 22:58:18.794724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.688 [2024-10-11 22:58:18.794740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.688 [2024-10-11 22:58:18.802471] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.688 [2024-10-11 22:58:18.802503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.688 [2024-10-11 22:58:18.802521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.688 [2024-10-11 22:58:18.810148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.688 [2024-10-11 22:58:18.810179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.688 [2024-10-11 22:58:18.810196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.688 [2024-10-11 22:58:18.817830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.688 [2024-10-11 22:58:18.817862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.688 [2024-10-11 22:58:18.817895] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.688 [2024-10-11 22:58:18.825461] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.688 [2024-10-11 22:58:18.825492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.688 [2024-10-11 22:58:18.825509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.688 [2024-10-11 22:58:18.833019] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.688 [2024-10-11 22:58:18.833066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.688 [2024-10-11 22:58:18.833084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.688 [2024-10-11 22:58:18.840653] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.688 [2024-10-11 22:58:18.840686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.688 [2024-10-11 22:58:18.840705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.688 [2024-10-11 22:58:18.848208] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.688 [2024-10-11 22:58:18.848255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:15.688 [2024-10-11 22:58:18.848272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.688 [2024-10-11 22:58:18.855876] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.688 [2024-10-11 22:58:18.855905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.688 [2024-10-11 22:58:18.855922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.688 [2024-10-11 22:58:18.863571] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.688 [2024-10-11 22:58:18.863603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.688 [2024-10-11 22:58:18.863621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.688 [2024-10-11 22:58:18.871891] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.688 [2024-10-11 22:58:18.871922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.688 [2024-10-11 22:58:18.871939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.688 [2024-10-11 22:58:18.878515] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.688 [2024-10-11 22:58:18.878547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.688 [2024-10-11 22:58:18.878573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.688 [2024-10-11 22:58:18.882417] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.688 [2024-10-11 22:58:18.882464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.688 [2024-10-11 22:58:18.882483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.688 [2024-10-11 22:58:18.889071] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.688 [2024-10-11 22:58:18.889117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.688 [2024-10-11 22:58:18.889134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.688 [2024-10-11 22:58:18.896849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.688 [2024-10-11 22:58:18.896894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.688 [2024-10-11 22:58:18.896911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.688 [2024-10-11 22:58:18.905149] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.688 [2024-10-11 22:58:18.905204] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.688 [2024-10-11 22:58:18.905222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.688 [2024-10-11 22:58:18.912136] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.688 [2024-10-11 22:58:18.912168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.688 [2024-10-11 22:58:18.912185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.688 [2024-10-11 22:58:18.917626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.688 [2024-10-11 22:58:18.917670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.688 [2024-10-11 22:58:18.917687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.688 [2024-10-11 22:58:18.922489] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.688 [2024-10-11 22:58:18.922520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.688 [2024-10-11 22:58:18.922538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.688 [2024-10-11 22:58:18.927614] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2364500) 00:35:15.688 [2024-10-11 22:58:18.927644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.688 [2024-10-11 22:58:18.927661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.688 [2024-10-11 22:58:18.932637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.688 [2024-10-11 22:58:18.932668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.688 [2024-10-11 22:58:18.932700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.688 [2024-10-11 22:58:18.938317] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.688 [2024-10-11 22:58:18.938349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.689 [2024-10-11 22:58:18.938380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.689 [2024-10-11 22:58:18.943433] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.689 [2024-10-11 22:58:18.943464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.689 [2024-10-11 22:58:18.943481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.689 [2024-10-11 22:58:18.948376] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.689 [2024-10-11 22:58:18.948422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.689 [2024-10-11 22:58:18.948439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.689 [2024-10-11 22:58:18.953416] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.689 [2024-10-11 22:58:18.953448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.689 [2024-10-11 22:58:18.953466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.948 [2024-10-11 22:58:18.958531] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.948 [2024-10-11 22:58:18.958574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.948 [2024-10-11 22:58:18.958594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.948 [2024-10-11 22:58:18.963632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.948 [2024-10-11 22:58:18.963679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.948 [2024-10-11 22:58:18.963697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:35:15.948 [2024-10-11 22:58:18.968682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.948 [2024-10-11 22:58:18.968712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.948 [2024-10-11 22:58:18.968730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.948 [2024-10-11 22:58:18.973787] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.948 [2024-10-11 22:58:18.973835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.948 [2024-10-11 22:58:18.973853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.948 [2024-10-11 22:58:18.978785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.948 [2024-10-11 22:58:18.978816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.948 [2024-10-11 22:58:18.978833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.948 [2024-10-11 22:58:18.984633] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.948 [2024-10-11 22:58:18.984664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.948 [2024-10-11 22:58:18.984696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.948 [2024-10-11 22:58:18.989449] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.948 [2024-10-11 22:58:18.989495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.948 [2024-10-11 22:58:18.989513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.948 [2024-10-11 22:58:18.995351] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.948 [2024-10-11 22:58:18.995397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.948 [2024-10-11 22:58:18.995419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.948 [2024-10-11 22:58:19.000711] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.948 [2024-10-11 22:58:19.000757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.948 [2024-10-11 22:58:19.000774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.948 [2024-10-11 22:58:19.005682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.948 [2024-10-11 22:58:19.005713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.948 [2024-10-11 22:58:19.005730] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.948 [2024-10-11 22:58:19.010675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.948 [2024-10-11 22:58:19.010719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.948 [2024-10-11 22:58:19.010735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.948 [2024-10-11 22:58:19.015637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.948 [2024-10-11 22:58:19.015667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.948 [2024-10-11 22:58:19.015685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.948 [2024-10-11 22:58:19.020693] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.948 [2024-10-11 22:58:19.020726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.948 [2024-10-11 22:58:19.020759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.948 [2024-10-11 22:58:19.026107] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.948 [2024-10-11 22:58:19.026140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:15.948 [2024-10-11 22:58:19.026158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.949 [2024-10-11 22:58:19.031079] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.949 [2024-10-11 22:58:19.031127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.949 [2024-10-11 22:58:19.031144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.949 [2024-10-11 22:58:19.036123] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.949 [2024-10-11 22:58:19.036154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.949 [2024-10-11 22:58:19.036171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.949 [2024-10-11 22:58:19.041666] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.949 [2024-10-11 22:58:19.041716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.949 [2024-10-11 22:58:19.041733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.949 [2024-10-11 22:58:19.048471] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.949 [2024-10-11 22:58:19.048502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.949 [2024-10-11 22:58:19.048532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.949 [2024-10-11 22:58:19.056083] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.949 [2024-10-11 22:58:19.056115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.949 [2024-10-11 22:58:19.056133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.949 [2024-10-11 22:58:19.062797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.949 [2024-10-11 22:58:19.062827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.949 [2024-10-11 22:58:19.062844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.949 [2024-10-11 22:58:19.069811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.949 [2024-10-11 22:58:19.069842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.949 [2024-10-11 22:58:19.069860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.949 [2024-10-11 22:58:19.075415] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.949 [2024-10-11 22:58:19.075446] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.949 [2024-10-11 22:58:19.075463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.949 [2024-10-11 22:58:19.080944] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.949 [2024-10-11 22:58:19.080976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.949 [2024-10-11 22:58:19.080994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.949 [2024-10-11 22:58:19.086017] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.949 [2024-10-11 22:58:19.086048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.949 [2024-10-11 22:58:19.086080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.949 [2024-10-11 22:58:19.090974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.949 [2024-10-11 22:58:19.091005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.949 [2024-10-11 22:58:19.091022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.949 [2024-10-11 22:58:19.096950] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 
00:35:15.949 [2024-10-11 22:58:19.096981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.949 [2024-10-11 22:58:19.096999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.949 [2024-10-11 22:58:19.104632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.949 [2024-10-11 22:58:19.104663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.949 [2024-10-11 22:58:19.104680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.949 [2024-10-11 22:58:19.111392] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.949 [2024-10-11 22:58:19.111425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.949 [2024-10-11 22:58:19.111457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.949 [2024-10-11 22:58:19.118790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.949 [2024-10-11 22:58:19.118821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.949 [2024-10-11 22:58:19.118837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.949 [2024-10-11 22:58:19.126522] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.949 [2024-10-11 22:58:19.126564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.949 [2024-10-11 22:58:19.126590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.949 [2024-10-11 22:58:19.132477] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.949 [2024-10-11 22:58:19.132508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.949 [2024-10-11 22:58:19.132525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.949 [2024-10-11 22:58:19.138927] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.949 [2024-10-11 22:58:19.138973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.949 [2024-10-11 22:58:19.138991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.949 [2024-10-11 22:58:19.145589] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.949 [2024-10-11 22:58:19.145621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.949 [2024-10-11 22:58:19.145640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:35:15.949 [2024-10-11 22:58:19.152296] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.949 [2024-10-11 22:58:19.152327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.949 [2024-10-11 22:58:19.152350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.949 [2024-10-11 22:58:19.159168] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.949 [2024-10-11 22:58:19.159216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.949 [2024-10-11 22:58:19.159233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.949 [2024-10-11 22:58:19.165230] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.949 [2024-10-11 22:58:19.165261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.949 [2024-10-11 22:58:19.165278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.949 [2024-10-11 22:58:19.170718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.949 [2024-10-11 22:58:19.170751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.949 [2024-10-11 22:58:19.170770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.949 [2024-10-11 22:58:19.175942] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.949 [2024-10-11 22:58:19.175995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.949 [2024-10-11 22:58:19.176015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.949 [2024-10-11 22:58:19.182091] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.949 [2024-10-11 22:58:19.182122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.949 [2024-10-11 22:58:19.182140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.949 [2024-10-11 22:58:19.187700] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.949 [2024-10-11 22:58:19.187732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.949 [2024-10-11 22:58:19.187750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.949 [2024-10-11 22:58:19.192632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.949 [2024-10-11 22:58:19.192663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.949 [2024-10-11 22:58:19.192681] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.949 [2024-10-11 22:58:19.197630] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.949 [2024-10-11 22:58:19.197664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.949 [2024-10-11 22:58:19.197682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.949 [2024-10-11 22:58:19.202507] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.949 [2024-10-11 22:58:19.202568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.949 [2024-10-11 22:58:19.202589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.950 [2024-10-11 22:58:19.207546] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.950 [2024-10-11 22:58:19.207586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.950 [2024-10-11 22:58:19.207604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.950 [2024-10-11 22:58:19.212615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:15.950 [2024-10-11 22:58:19.212665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:15.950 [2024-10-11 22:58:19.212683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.208 [2024-10-11 22:58:19.217804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:16.208 [2024-10-11 22:58:19.217836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.208 [2024-10-11 22:58:19.217868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.208 [2024-10-11 22:58:19.222789] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:16.208 [2024-10-11 22:58:19.222822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.208 [2024-10-11 22:58:19.222840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.209 [2024-10-11 22:58:19.227859] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:16.209 [2024-10-11 22:58:19.227889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.209 [2024-10-11 22:58:19.227906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.209 [2024-10-11 22:58:19.232679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:16.209 [2024-10-11 22:58:19.232725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.209 [2024-10-11 22:58:19.232743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.209 [2024-10-11 22:58:19.237600] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:16.209 [2024-10-11 22:58:19.237648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.209 [2024-10-11 22:58:19.237667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.209 [2024-10-11 22:58:19.242722] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:16.209 [2024-10-11 22:58:19.242755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.209 [2024-10-11 22:58:19.242773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.209 [2024-10-11 22:58:19.248465] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:16.209 [2024-10-11 22:58:19.248511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.209 [2024-10-11 22:58:19.248528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.209 [2024-10-11 22:58:19.253731] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:16.209 [2024-10-11 
22:58:19.253778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.209 [2024-10-11 22:58:19.253796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.209 [2024-10-11 22:58:19.259568] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:16.209 [2024-10-11 22:58:19.259617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.209 [2024-10-11 22:58:19.259635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.209 [2024-10-11 22:58:19.265952] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:16.209 [2024-10-11 22:58:19.265985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.209 [2024-10-11 22:58:19.266017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.209 [2024-10-11 22:58:19.272069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:16.209 [2024-10-11 22:58:19.272101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.209 [2024-10-11 22:58:19.272119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.209 [2024-10-11 22:58:19.277892] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x2364500) 00:35:16.209 [2024-10-11 22:58:19.277938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.209 [2024-10-11 22:58:19.277956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.209 [2024-10-11 22:58:19.283592] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:16.209 [2024-10-11 22:58:19.283625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.209 [2024-10-11 22:58:19.283643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.209 [2024-10-11 22:58:19.289311] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:16.209 [2024-10-11 22:58:19.289343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.209 [2024-10-11 22:58:19.289361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.209 [2024-10-11 22:58:19.295077] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:16.209 [2024-10-11 22:58:19.295116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.209 [2024-10-11 22:58:19.295135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.209 [2024-10-11 22:58:19.300631] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:16.209 [2024-10-11 22:58:19.300663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.209 [2024-10-11 22:58:19.300681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.209 [2024-10-11 22:58:19.306247] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:16.209 [2024-10-11 22:58:19.306278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.209 [2024-10-11 22:58:19.306296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.209 [2024-10-11 22:58:19.312023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2364500) 00:35:16.209 [2024-10-11 22:58:19.312055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.209 [2024-10-11 22:58:19.312073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.209 5214.50 IOPS, 651.81 MiB/s 00:35:16.209 Latency(us) 00:35:16.209 [2024-10-11T20:58:19.477Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:16.209 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:16.209 nvme0n1 : 2.00 5213.31 651.66 0.00 0.00 3064.93 837.40 8689.59 00:35:16.209 [2024-10-11T20:58:19.477Z] =================================================================================================================== 
00:35:16.209 [2024-10-11T20:58:19.477Z] Total : 5213.31 651.66 0.00 0.00 3064.93 837.40 8689.59 00:35:16.209 { 00:35:16.209 "results": [ 00:35:16.209 { 00:35:16.209 "job": "nvme0n1", 00:35:16.209 "core_mask": "0x2", 00:35:16.209 "workload": "randread", 00:35:16.209 "status": "finished", 00:35:16.209 "queue_depth": 16, 00:35:16.209 "io_size": 131072, 00:35:16.209 "runtime": 2.003527, 00:35:16.209 "iops": 5213.3063342794985, 00:35:16.209 "mibps": 651.6632917849373, 00:35:16.209 "io_failed": 0, 00:35:16.209 "io_timeout": 0, 00:35:16.209 "avg_latency_us": 3064.9289661897415, 00:35:16.209 "min_latency_us": 837.4044444444445, 00:35:16.209 "max_latency_us": 8689.588148148148 00:35:16.209 } 00:35:16.209 ], 00:35:16.209 "core_count": 1 00:35:16.209 } 00:35:16.209 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:16.209 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:16.209 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:16.209 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:16.209 | .driver_specific 00:35:16.209 | .nvme_error 00:35:16.209 | .status_code 00:35:16.209 | .command_transient_transport_error' 00:35:16.468 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 336 > 0 )) 00:35:16.468 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 393157 00:35:16.468 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 393157 ']' 00:35:16.468 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 393157 00:35:16.468 22:58:19 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:16.468 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:16.468 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 393157 00:35:16.468 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:16.468 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:16.468 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 393157' 00:35:16.468 killing process with pid 393157 00:35:16.468 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 393157 00:35:16.468 Received shutdown signal, test time was about 2.000000 seconds 00:35:16.468 00:35:16.468 Latency(us) 00:35:16.468 [2024-10-11T20:58:19.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:16.468 [2024-10-11T20:58:19.736Z] =================================================================================================================== 00:35:16.468 [2024-10-11T20:58:19.736Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:16.468 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 393157 00:35:16.726 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:35:16.726 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:16.726 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:16.726 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 
00:35:16.726 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:16.726 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=393689 00:35:16.726 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:35:16.726 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 393689 /var/tmp/bperf.sock 00:35:16.726 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 393689 ']' 00:35:16.726 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:16.726 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:16.726 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:16.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:16.726 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:16.726 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:16.726 [2024-10-11 22:58:19.898914] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:35:16.726 [2024-10-11 22:58:19.898999] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid393689 ] 00:35:16.726 [2024-10-11 22:58:19.957628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:16.985 [2024-10-11 22:58:20.007031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:16.985 22:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:16.985 22:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:16.985 22:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:16.985 22:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:17.243 22:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:17.243 22:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.243 22:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:17.243 22:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.243 22:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:17.243 22:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:17.810 nvme0n1 00:35:17.810 22:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:17.810 22:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.810 22:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:17.810 22:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.810 22:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:17.810 22:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:17.810 Running I/O for 2 seconds... 
00:35:17.810 [2024-10-11 22:58:20.950137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f6458 00:35:17.810 [2024-10-11 22:58:20.951034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.810 [2024-10-11 22:58:20.951073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:17.810 [2024-10-11 22:58:20.961971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166fe720 00:35:17.810 [2024-10-11 22:58:20.962710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.810 [2024-10-11 22:58:20.962754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:17.810 [2024-10-11 22:58:20.974753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166dece0 00:35:17.810 [2024-10-11 22:58:20.975680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.810 [2024-10-11 22:58:20.975724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:17.810 [2024-10-11 22:58:20.989566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166fd208 00:35:17.810 [2024-10-11 22:58:20.991000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.810 [2024-10-11 22:58:20.991044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:17.810 [2024-10-11 22:58:20.999638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f4b08 00:35:17.810 [2024-10-11 22:58:21.000331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.810 [2024-10-11 22:58:21.000374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:17.810 [2024-10-11 22:58:21.012181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f8e88 00:35:17.810 [2024-10-11 22:58:21.013063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.810 [2024-10-11 22:58:21.013092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:17.810 [2024-10-11 22:58:21.026663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166e99d8 00:35:17.811 [2024-10-11 22:58:21.028436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.811 [2024-10-11 22:58:21.028478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:17.811 [2024-10-11 22:58:21.035121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f8a50 00:35:17.811 [2024-10-11 22:58:21.035885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.811 [2024-10-11 22:58:21.035928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:17.811 [2024-10-11 22:58:21.047608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166e1b48 00:35:17.811 [2024-10-11 22:58:21.048512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.811 [2024-10-11 22:58:21.048561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:17.811 [2024-10-11 22:58:21.059676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166fd640 00:35:17.811 [2024-10-11 22:58:21.060960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.811 [2024-10-11 22:58:21.060987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:17.811 [2024-10-11 22:58:21.074006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f9f68 00:35:17.811 [2024-10-11 22:58:21.075978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.811 [2024-10-11 22:58:21.076006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:18.069 [2024-10-11 22:58:21.082776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166ee5c8 00:35:18.069 [2024-10-11 22:58:21.083627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.069 [2024-10-11 22:58:21.083653] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:18.069 [2024-10-11 22:58:21.095077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166e27f0 00:35:18.069 [2024-10-11 22:58:21.096221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.069 [2024-10-11 22:58:21.096264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:18.069 [2024-10-11 22:58:21.107780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166fda78 00:35:18.069 [2024-10-11 22:58:21.109120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.069 [2024-10-11 22:58:21.109163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:18.069 [2024-10-11 22:58:21.120200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166e6300 00:35:18.069 [2024-10-11 22:58:21.121701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.069 [2024-10-11 22:58:21.121743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:18.069 [2024-10-11 22:58:21.132692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f35f0 00:35:18.069 [2024-10-11 22:58:21.134362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.069 
[2024-10-11 22:58:21.134390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:18.069 [2024-10-11 22:58:21.141030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166e6738 00:35:18.069 [2024-10-11 22:58:21.141811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.069 [2024-10-11 22:58:21.141839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:18.069 [2024-10-11 22:58:21.155341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166fd208 00:35:18.069 [2024-10-11 22:58:21.156558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.069 [2024-10-11 22:58:21.156599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:18.069 [2024-10-11 22:58:21.167701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166e1f80 00:35:18.069 [2024-10-11 22:58:21.169112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.069 [2024-10-11 22:58:21.169154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:18.069 [2024-10-11 22:58:21.180081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166ebfd0 00:35:18.069 [2024-10-11 22:58:21.181759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19033 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.069 [2024-10-11 22:58:21.181786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:18.069 [2024-10-11 22:58:21.192509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:18.069 [2024-10-11 22:58:21.194317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.069 [2024-10-11 22:58:21.194358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:18.069 [2024-10-11 22:58:21.200816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f7970 00:35:18.069 [2024-10-11 22:58:21.201866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.069 [2024-10-11 22:58:21.201894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:18.069 [2024-10-11 22:58:21.213389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166df118 00:35:18.069 [2024-10-11 22:58:21.214505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.069 [2024-10-11 22:58:21.214546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:18.069 [2024-10-11 22:58:21.225939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166dece0 00:35:18.069 [2024-10-11 22:58:21.227142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:39 nsid:1 lba:25397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.070 [2024-10-11 22:58:21.227184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:18.070 [2024-10-11 22:58:21.237988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166e4578 00:35:18.070 [2024-10-11 22:58:21.239277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.070 [2024-10-11 22:58:21.239321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:18.070 [2024-10-11 22:58:21.251966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166e2c28 00:35:18.070 [2024-10-11 22:58:21.253763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.070 [2024-10-11 22:58:21.253806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:18.070 [2024-10-11 22:58:21.260405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166fac10 00:35:18.070 [2024-10-11 22:58:21.261213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.070 [2024-10-11 22:58:21.261254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:18.070 [2024-10-11 22:58:21.272847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166eb760 00:35:18.070 [2024-10-11 22:58:21.273790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.070 [2024-10-11 22:58:21.273817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:18.070 [2024-10-11 22:58:21.284963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f31b8 00:35:18.070 [2024-10-11 22:58:21.286234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.070 [2024-10-11 22:58:21.286277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:18.070 [2024-10-11 22:58:21.297630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166eb328 00:35:18.070 [2024-10-11 22:58:21.299009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.070 [2024-10-11 22:58:21.299051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:18.070 [2024-10-11 22:58:21.310070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f6458 00:35:18.070 [2024-10-11 22:58:21.311606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.070 [2024-10-11 22:58:21.311653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:18.070 [2024-10-11 22:58:21.320499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166de038 00:35:18.070 
[2024-10-11 22:58:21.322204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.070 [2024-10-11 22:58:21.322232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:18.070 [2024-10-11 22:58:21.333079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166e8088 00:35:18.070 [2024-10-11 22:58:21.334180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.070 [2024-10-11 22:58:21.334208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:18.329 [2024-10-11 22:58:21.344977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f8618 00:35:18.329 [2024-10-11 22:58:21.346301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.329 [2024-10-11 22:58:21.346329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:18.329 [2024-10-11 22:58:21.356709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f6020 00:35:18.329 [2024-10-11 22:58:21.358057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.329 [2024-10-11 22:58:21.358083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:18.329 [2024-10-11 22:58:21.368649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x8e7380) with pdu=0x2000166f2948 00:35:18.329 [2024-10-11 22:58:21.369532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.329 [2024-10-11 22:58:21.369585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:18.329 [2024-10-11 22:58:21.380048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166ef270 00:35:18.329 [2024-10-11 22:58:21.381239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.329 [2024-10-11 22:58:21.381268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:18.329 [2024-10-11 22:58:21.391833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166e8088 00:35:18.329 [2024-10-11 22:58:21.392816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.329 [2024-10-11 22:58:21.392843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:18.329 [2024-10-11 22:58:21.404218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166dfdc0 00:35:18.329 [2024-10-11 22:58:21.405379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.329 [2024-10-11 22:58:21.405420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:18.329 [2024-10-11 22:58:21.415951] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166df988 00:35:18.329 [2024-10-11 22:58:21.417218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.329 [2024-10-11 22:58:21.417259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:18.329 [2024-10-11 22:58:21.428377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166e0a68 00:35:18.329 [2024-10-11 22:58:21.429813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.329 [2024-10-11 22:58:21.429858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:18.329 [2024-10-11 22:58:21.440503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f7100 00:35:18.329 [2024-10-11 22:58:21.442048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.329 [2024-10-11 22:58:21.442076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:18.329 [2024-10-11 22:58:21.452154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166de470 00:35:18.329 [2024-10-11 22:58:21.453260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.329 [2024-10-11 22:58:21.453290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:35:18.329 [2024-10-11 22:58:21.466045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166ecc78 00:35:18.329 [2024-10-11 22:58:21.468004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.329 [2024-10-11 22:58:21.468046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:18.329 [2024-10-11 22:58:21.474703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166fda78 00:35:18.330 [2024-10-11 22:58:21.475553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.330 [2024-10-11 22:58:21.475595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:18.330 [2024-10-11 22:58:21.487240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f9b30 00:35:18.330 [2024-10-11 22:58:21.488226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.330 [2024-10-11 22:58:21.488266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:18.330 [2024-10-11 22:58:21.499348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166e5220 00:35:18.330 [2024-10-11 22:58:21.500685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.330 [2024-10-11 22:58:21.500727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:39 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.330 [2024-10-11 22:58:21.511656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166e95a0 00:35:18.330 [2024-10-11 22:58:21.513081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.330 [2024-10-11 22:58:21.513122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:18.330 [2024-10-11 22:58:21.524026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f8e88 00:35:18.330 [2024-10-11 22:58:21.525653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.330 [2024-10-11 22:58:21.525697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.330 [2024-10-11 22:58:21.534505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166fb480 00:35:18.330 [2024-10-11 22:58:21.536285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.330 [2024-10-11 22:58:21.536314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.330 [2024-10-11 22:58:21.546575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166e5220 00:35:18.330 [2024-10-11 22:58:21.548012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.330 [2024-10-11 22:58:21.548039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.330 [2024-10-11 22:58:21.558521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166e23b8 00:35:18.330 [2024-10-11 22:58:21.559724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.330 [2024-10-11 22:58:21.559765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.330 [2024-10-11 22:58:21.570783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166e3d08 00:35:18.330 [2024-10-11 22:58:21.572234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.330 [2024-10-11 22:58:21.572276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:18.330 [2024-10-11 22:58:21.582899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166e73e0 00:35:18.330 [2024-10-11 22:58:21.584358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.330 [2024-10-11 22:58:21.584400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:18.330 [2024-10-11 22:58:21.593269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166df118 00:35:18.330 [2024-10-11 22:58:21.594896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.330 [2024-10-11 22:58:21.594926] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:18.589 [2024-10-11 22:58:21.605510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166e12d8 00:35:18.589 [2024-10-11 22:58:21.606821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.589 [2024-10-11 22:58:21.606850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:18.589 [2024-10-11 22:58:21.617382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f4298 00:35:18.589 [2024-10-11 22:58:21.618468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.589 [2024-10-11 22:58:21.618515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:18.589 [2024-10-11 22:58:21.629611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f9b30 00:35:18.589 [2024-10-11 22:58:21.631053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.589 [2024-10-11 22:58:21.631080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:18.589 [2024-10-11 22:58:21.641262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166ef6a8 00:35:18.589 [2024-10-11 22:58:21.642275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.589 
[2024-10-11 22:58:21.642318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:18.589 [2024-10-11 22:58:21.652354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166df118 00:35:18.589 [2024-10-11 22:58:21.654014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.589 [2024-10-11 22:58:21.654043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:18.589 [2024-10-11 22:58:21.664536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f9f68 00:35:18.589 [2024-10-11 22:58:21.665827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.589 [2024-10-11 22:58:21.665856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:18.589 [2024-10-11 22:58:21.676289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166e6fa8 00:35:18.589 [2024-10-11 22:58:21.677509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.589 [2024-10-11 22:58:21.677555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:18.589 [2024-10-11 22:58:21.688995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166efae0 00:35:18.589 [2024-10-11 22:58:21.690430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2591 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:35:18.589 [2024-10-11 22:58:21.690472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:18.589 [2024-10-11 22:58:21.701547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166ec408 00:35:18.589 [2024-10-11 22:58:21.703189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.589 [2024-10-11 22:58:21.703232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:18.589 [2024-10-11 22:58:21.714139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166feb58 00:35:18.589 [2024-10-11 22:58:21.716019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.589 [2024-10-11 22:58:21.716061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:18.589 [2024-10-11 22:58:21.722768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166ed4e8 00:35:18.589 [2024-10-11 22:58:21.723666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.589 [2024-10-11 22:58:21.723693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:18.589 [2024-10-11 22:58:21.737137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f31b8 00:35:18.589 [2024-10-11 22:58:21.738620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:21 nsid:1 lba:8578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.589 [2024-10-11 22:58:21.738648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:18.589 [2024-10-11 22:58:21.748137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166eea00 00:35:18.589 [2024-10-11 22:58:21.749355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.589 [2024-10-11 22:58:21.749384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:18.589 [2024-10-11 22:58:21.759718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f1868 00:35:18.589 [2024-10-11 22:58:21.760946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.589 [2024-10-11 22:58:21.760988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:18.589 [2024-10-11 22:58:21.772249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0350 00:35:18.589 [2024-10-11 22:58:21.773675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.589 [2024-10-11 22:58:21.773718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:18.589 [2024-10-11 22:58:21.784886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166ee190 00:35:18.589 [2024-10-11 22:58:21.786375] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.589 [2024-10-11 22:58:21.786419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:18.589 [2024-10-11 22:58:21.795496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f9f68 00:35:18.589 [2024-10-11 22:58:21.797082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.589 [2024-10-11 22:58:21.797111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:18.589 [2024-10-11 22:58:21.805897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f1868 00:35:18.589 [2024-10-11 22:58:21.806651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.589 [2024-10-11 22:58:21.806681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:18.589 [2024-10-11 22:58:21.818545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166fe720 00:35:18.589 [2024-10-11 22:58:21.819471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.590 [2024-10-11 22:58:21.819513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:18.590 [2024-10-11 22:58:21.830754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166ed920 00:35:18.590 
[2024-10-11 22:58:21.831708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.590 [2024-10-11 22:58:21.831738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:18.590 [2024-10-11 22:58:21.844970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f7538 00:35:18.590 [2024-10-11 22:58:21.846341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.590 [2024-10-11 22:58:21.846384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:18.590 [2024-10-11 22:58:21.855034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166e6300 00:35:18.590 [2024-10-11 22:58:21.855677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.590 [2024-10-11 22:58:21.855705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:18.849 [2024-10-11 22:58:21.867656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f3a28 00:35:18.849 [2024-10-11 22:58:21.868437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.849 [2024-10-11 22:58:21.868480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:18.849 [2024-10-11 22:58:21.882335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x8e7380) with pdu=0x2000166ed920 00:35:18.849 [2024-10-11 22:58:21.884172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.849 [2024-10-11 22:58:21.884202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:18.849 [2024-10-11 22:58:21.890791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166fc998 00:35:18.849 [2024-10-11 22:58:21.891719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.849 [2024-10-11 22:58:21.891762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:18.849 [2024-10-11 22:58:21.903100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166e3060 00:35:18.849 [2024-10-11 22:58:21.904033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.849 [2024-10-11 22:58:21.904062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:18.849 [2024-10-11 22:58:21.917239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166e84c0 00:35:18.849 [2024-10-11 22:58:21.918738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.849 [2024-10-11 22:58:21.918783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:18.849 [2024-10-11 22:58:21.929785] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166e1b48 00:35:18.849 [2024-10-11 22:58:21.931509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.849 [2024-10-11 22:58:21.931569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:18.849 [2024-10-11 22:58:21.940797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:18.849 [2024-10-11 22:58:21.941961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.849 [2024-10-11 22:58:21.941991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.849 21400.00 IOPS, 83.59 MiB/s [2024-10-11T20:58:22.117Z] [2024-10-11 22:58:21.954794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:18.849 [2024-10-11 22:58:21.955149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.849 [2024-10-11 22:58:21.955175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.849 [2024-10-11 22:58:21.968738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:18.849 [2024-10-11 22:58:21.969013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.849 [2024-10-11 22:58:21.969056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.849 [2024-10-11 22:58:21.982862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:18.849 [2024-10-11 22:58:21.983136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.849 [2024-10-11 22:58:21.983178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.849 [2024-10-11 22:58:21.996992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:18.849 [2024-10-11 22:58:21.997278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.849 [2024-10-11 22:58:21.997322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.849 [2024-10-11 22:58:22.010879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:18.849 [2024-10-11 22:58:22.011175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.849 [2024-10-11 22:58:22.011218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.849 [2024-10-11 22:58:22.025031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:18.849 [2024-10-11 22:58:22.025283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.849 [2024-10-11 22:58:22.025324] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.849 [2024-10-11 22:58:22.039115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:18.849 [2024-10-11 22:58:22.039393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.849 [2024-10-11 22:58:22.039435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.849 [2024-10-11 22:58:22.053077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:18.849 [2024-10-11 22:58:22.053398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.849 [2024-10-11 22:58:22.053424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.849 [2024-10-11 22:58:22.067250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:18.850 [2024-10-11 22:58:22.067462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.850 [2024-10-11 22:58:22.067489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.850 [2024-10-11 22:58:22.081073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:18.850 [2024-10-11 22:58:22.081359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.850 [2024-10-11 22:58:22.081401] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.850 [2024-10-11 22:58:22.095037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:18.850 [2024-10-11 22:58:22.095291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.850 [2024-10-11 22:58:22.095318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.850 [2024-10-11 22:58:22.108979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:18.850 [2024-10-11 22:58:22.109272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.850 [2024-10-11 22:58:22.109299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.109 [2024-10-11 22:58:22.122903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.109 [2024-10-11 22:58:22.123149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.109 [2024-10-11 22:58:22.123190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.109 [2024-10-11 22:58:22.136809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.109 [2024-10-11 22:58:22.137083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:19.109 [2024-10-11 22:58:22.137109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.109 [2024-10-11 22:58:22.150759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.109 [2024-10-11 22:58:22.151063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.109 [2024-10-11 22:58:22.151105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.109 [2024-10-11 22:58:22.164775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.109 [2024-10-11 22:58:22.165045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.109 [2024-10-11 22:58:22.165086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.109 [2024-10-11 22:58:22.178680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.109 [2024-10-11 22:58:22.178962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.109 [2024-10-11 22:58:22.179003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.109 [2024-10-11 22:58:22.192516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.109 [2024-10-11 22:58:22.192801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24692 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.109 [2024-10-11 22:58:22.192828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.109 [2024-10-11 22:58:22.207251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.109 [2024-10-11 22:58:22.207569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.109 [2024-10-11 22:58:22.207600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.109 [2024-10-11 22:58:22.221109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.109 [2024-10-11 22:58:22.221410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.109 [2024-10-11 22:58:22.221454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.109 [2024-10-11 22:58:22.235068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.109 [2024-10-11 22:58:22.235319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.109 [2024-10-11 22:58:22.235361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.109 [2024-10-11 22:58:22.248881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.109 [2024-10-11 22:58:22.249106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:13942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.109 [2024-10-11 22:58:22.249133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.109 [2024-10-11 22:58:22.262684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.109 [2024-10-11 22:58:22.262955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.109 [2024-10-11 22:58:22.262997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.109 [2024-10-11 22:58:22.276495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.109 [2024-10-11 22:58:22.276797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.109 [2024-10-11 22:58:22.276841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.109 [2024-10-11 22:58:22.290155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.109 [2024-10-11 22:58:22.290347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.109 [2024-10-11 22:58:22.290380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.109 [2024-10-11 22:58:22.303804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.109 [2024-10-11 22:58:22.304102] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.110 [2024-10-11 22:58:22.304129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.110 [2024-10-11 22:58:22.317381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.110 [2024-10-11 22:58:22.317631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.110 [2024-10-11 22:58:22.317659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.110 [2024-10-11 22:58:22.331142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.110 [2024-10-11 22:58:22.331459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.110 [2024-10-11 22:58:22.331501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.110 [2024-10-11 22:58:22.345005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.110 [2024-10-11 22:58:22.345216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.110 [2024-10-11 22:58:22.345242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.110 [2024-10-11 22:58:22.358754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.110 
[2024-10-11 22:58:22.359074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.110 [2024-10-11 22:58:22.359115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.110 [2024-10-11 22:58:22.372577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.110 [2024-10-11 22:58:22.372793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.110 [2024-10-11 22:58:22.372821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.369 [2024-10-11 22:58:22.386164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.369 [2024-10-11 22:58:22.386477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.369 [2024-10-11 22:58:22.386519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.369 [2024-10-11 22:58:22.399942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.369 [2024-10-11 22:58:22.400251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.369 [2024-10-11 22:58:22.400293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.369 [2024-10-11 22:58:22.413795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.369 [2024-10-11 22:58:22.414078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.369 [2024-10-11 22:58:22.414104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.369 [2024-10-11 22:58:22.427464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.369 [2024-10-11 22:58:22.427804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.369 [2024-10-11 22:58:22.427848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.369 [2024-10-11 22:58:22.441301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.369 [2024-10-11 22:58:22.441605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.369 [2024-10-11 22:58:22.441647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.369 [2024-10-11 22:58:22.455109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.369 [2024-10-11 22:58:22.455433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.369 [2024-10-11 22:58:22.455461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.369 [2024-10-11 22:58:22.469003] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.369 [2024-10-11 22:58:22.469313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.369 [2024-10-11 22:58:22.469340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.369 [2024-10-11 22:58:22.482473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.369 [2024-10-11 22:58:22.482813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.369 [2024-10-11 22:58:22.482858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.369 [2024-10-11 22:58:22.497192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.369 [2024-10-11 22:58:22.497411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.369 [2024-10-11 22:58:22.497438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.369 [2024-10-11 22:58:22.510677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.369 [2024-10-11 22:58:22.510942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.369 [2024-10-11 22:58:22.510985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:35:19.369 [2024-10-11 22:58:22.524304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.369 [2024-10-11 22:58:22.524582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.369 [2024-10-11 22:58:22.524610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.369 [2024-10-11 22:58:22.538113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.369 [2024-10-11 22:58:22.538415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.369 [2024-10-11 22:58:22.538456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.369 [2024-10-11 22:58:22.551891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.369 [2024-10-11 22:58:22.552159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.369 [2024-10-11 22:58:22.552186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.369 [2024-10-11 22:58:22.565703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.369 [2024-10-11 22:58:22.565982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.369 [2024-10-11 22:58:22.566024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.369 [2024-10-11 22:58:22.579522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.370 [2024-10-11 22:58:22.579805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.370 [2024-10-11 22:58:22.579832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.370 [2024-10-11 22:58:22.593241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.370 [2024-10-11 22:58:22.593579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.370 [2024-10-11 22:58:22.593606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.370 [2024-10-11 22:58:22.607050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.370 [2024-10-11 22:58:22.607265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.370 [2024-10-11 22:58:22.607292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.370 [2024-10-11 22:58:22.620644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.370 [2024-10-11 22:58:22.620902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.370 [2024-10-11 22:58:22.620928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.370 [2024-10-11 22:58:22.634278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.370 [2024-10-11 22:58:22.634525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.370 [2024-10-11 22:58:22.634559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.629 [2024-10-11 22:58:22.647911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.629 [2024-10-11 22:58:22.648167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.629 [2024-10-11 22:58:22.648194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.629 [2024-10-11 22:58:22.661523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.629 [2024-10-11 22:58:22.661800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.629 [2024-10-11 22:58:22.661842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.629 [2024-10-11 22:58:22.675250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.629 [2024-10-11 22:58:22.675556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.629 [2024-10-11 22:58:22.675599] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.629 [2024-10-11 22:58:22.689177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.629 [2024-10-11 22:58:22.689391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.629 [2024-10-11 22:58:22.689418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.629 [2024-10-11 22:58:22.703071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.629 [2024-10-11 22:58:22.703310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.629 [2024-10-11 22:58:22.703336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.629 [2024-10-11 22:58:22.716844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.629 [2024-10-11 22:58:22.717134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.629 [2024-10-11 22:58:22.717177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.629 [2024-10-11 22:58:22.730512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.629 [2024-10-11 22:58:22.730799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.629 
[2024-10-11 22:58:22.730842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.629 [2024-10-11 22:58:22.744185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.629 [2024-10-11 22:58:22.744436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.629 [2024-10-11 22:58:22.744462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.629 [2024-10-11 22:58:22.757826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.629 [2024-10-11 22:58:22.758094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.629 [2024-10-11 22:58:22.758135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.629 [2024-10-11 22:58:22.771594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.629 [2024-10-11 22:58:22.771846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.629 [2024-10-11 22:58:22.771894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.629 [2024-10-11 22:58:22.785250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.629 [2024-10-11 22:58:22.785487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22331 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:35:19.629 [2024-10-11 22:58:22.785514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.629 [2024-10-11 22:58:22.798953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.629 [2024-10-11 22:58:22.799258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.629 [2024-10-11 22:58:22.799299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.629 [2024-10-11 22:58:22.812544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.629 [2024-10-11 22:58:22.812846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.629 [2024-10-11 22:58:22.812888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.629 [2024-10-11 22:58:22.826295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.629 [2024-10-11 22:58:22.826615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.629 [2024-10-11 22:58:22.826658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.629 [2024-10-11 22:58:22.840006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.629 [2024-10-11 22:58:22.840261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:28 nsid:1 lba:24549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.629 [2024-10-11 22:58:22.840303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.629 [2024-10-11 22:58:22.853783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.629 [2024-10-11 22:58:22.854065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.629 [2024-10-11 22:58:22.854106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.629 [2024-10-11 22:58:22.867535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.629 [2024-10-11 22:58:22.867764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.629 [2024-10-11 22:58:22.867791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.629 [2024-10-11 22:58:22.881114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.629 [2024-10-11 22:58:22.881352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.630 [2024-10-11 22:58:22.881393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.630 [2024-10-11 22:58:22.894722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.630 [2024-10-11 22:58:22.894936] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.630 [2024-10-11 22:58:22.894964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.888 [2024-10-11 22:58:22.908495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.888 [2024-10-11 22:58:22.908868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.888 [2024-10-11 22:58:22.908896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.888 [2024-10-11 22:58:22.922062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.888 [2024-10-11 22:58:22.922309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.888 [2024-10-11 22:58:22.922350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.888 [2024-10-11 22:58:22.935172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.888 [2024-10-11 22:58:22.935477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.888 [2024-10-11 22:58:22.935517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.888 19959.50 IOPS, 77.97 MiB/s [2024-10-11T20:58:23.156Z] [2024-10-11 22:58:22.948407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x8e7380) with pdu=0x2000166f0bc0 00:35:19.888 [2024-10-11 22:58:22.948673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.888 [2024-10-11 22:58:22.948717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:19.888 00:35:19.888 Latency(us) 00:35:19.888 [2024-10-11T20:58:23.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:19.888 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:19.888 nvme0n1 : 2.01 19953.88 77.94 0.00 0.00 6400.14 2682.12 16311.18 00:35:19.888 [2024-10-11T20:58:23.156Z] =================================================================================================================== 00:35:19.888 [2024-10-11T20:58:23.156Z] Total : 19953.88 77.94 0.00 0.00 6400.14 2682.12 16311.18 00:35:19.888 { 00:35:19.888 "results": [ 00:35:19.888 { 00:35:19.888 "job": "nvme0n1", 00:35:19.888 "core_mask": "0x2", 00:35:19.888 "workload": "randwrite", 00:35:19.888 "status": "finished", 00:35:19.888 "queue_depth": 128, 00:35:19.888 "io_size": 4096, 00:35:19.888 "runtime": 2.006577, 00:35:19.888 "iops": 19953.88166016056, 00:35:19.888 "mibps": 77.94485023500219, 00:35:19.888 "io_failed": 0, 00:35:19.888 "io_timeout": 0, 00:35:19.888 "avg_latency_us": 6400.1362772038, 00:35:19.888 "min_latency_us": 2682.1214814814816, 00:35:19.888 "max_latency_us": 16311.182222222222 00:35:19.888 } 00:35:19.888 ], 00:35:19.888 "core_count": 1 00:35:19.888 } 00:35:19.888 22:58:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:19.889 22:58:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:19.889 22:58:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:19.889 22:58:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:19.889 | .driver_specific 00:35:19.889 | .nvme_error 00:35:19.889 | .status_code 00:35:19.889 | .command_transient_transport_error' 00:35:20.147 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 157 > 0 )) 00:35:20.147 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 393689 00:35:20.147 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 393689 ']' 00:35:20.147 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 393689 00:35:20.147 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:20.147 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:20.147 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 393689 00:35:20.147 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:20.147 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:20.147 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 393689' 00:35:20.147 killing process with pid 393689 00:35:20.147 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 393689 00:35:20.147 Received shutdown signal, test time was about 2.000000 seconds 00:35:20.147 00:35:20.147 Latency(us) 00:35:20.147 [2024-10-11T20:58:23.415Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:35:20.147 [2024-10-11T20:58:23.415Z] =================================================================================================================== 00:35:20.147 [2024-10-11T20:58:23.415Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:20.147 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 393689 00:35:20.406 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:35:20.406 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:20.406 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:20.406 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:20.406 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:20.406 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=394089 00:35:20.406 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:35:20.406 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 394089 /var/tmp/bperf.sock 00:35:20.406 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 394089 ']' 00:35:20.406 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:20.406 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:20.406 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:20.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:20.406 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:20.406 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:20.406 [2024-10-11 22:58:23.554801] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:35:20.406 [2024-10-11 22:58:23.554896] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid394089 ] 00:35:20.406 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:20.406 Zero copy mechanism will not be used. 00:35:20.406 [2024-10-11 22:58:23.613413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:20.406 [2024-10-11 22:58:23.655587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:20.664 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:20.664 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:20.664 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:20.664 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:20.922 22:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:20.922 22:58:24 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.922 22:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:20.922 22:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.922 22:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:20.923 22:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:21.489 nvme0n1 00:35:21.489 22:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:21.489 22:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.489 22:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:21.489 22:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.489 22:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:21.489 22:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:21.489 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:21.489 Zero copy mechanism will not be used. 00:35:21.489 Running I/O for 2 seconds... 
00:35:21.489 [2024-10-11 22:58:24.659949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.489 [2024-10-11 22:58:24.660265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.489 [2024-10-11 22:58:24.660303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.489 [2024-10-11 22:58:24.665332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.489 [2024-10-11 22:58:24.665659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.489 [2024-10-11 22:58:24.665691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.489 [2024-10-11 22:58:24.670563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.490 [2024-10-11 22:58:24.670896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.490 [2024-10-11 22:58:24.670926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.490 [2024-10-11 22:58:24.675703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.490 [2024-10-11 22:58:24.676002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.490 [2024-10-11 22:58:24.676032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.490 [2024-10-11 22:58:24.680880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.490 [2024-10-11 22:58:24.681187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.490 [2024-10-11 22:58:24.681216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.490 [2024-10-11 22:58:24.686644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.490 [2024-10-11 22:58:24.686946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.490 [2024-10-11 22:58:24.686975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.490 [2024-10-11 22:58:24.693985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.490 [2024-10-11 22:58:24.694306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.490 [2024-10-11 22:58:24.694334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.490 [2024-10-11 22:58:24.699742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.490 [2024-10-11 22:58:24.700111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.490 [2024-10-11 22:58:24.700141] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.490 [2024-10-11 22:58:24.705242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.490 [2024-10-11 22:58:24.705601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.490 [2024-10-11 22:58:24.705631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.490 [2024-10-11 22:58:24.711336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.490 [2024-10-11 22:58:24.711734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.490 [2024-10-11 22:58:24.711764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.490 [2024-10-11 22:58:24.718082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.490 [2024-10-11 22:58:24.718369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.490 [2024-10-11 22:58:24.718400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.490 [2024-10-11 22:58:24.724443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.490 [2024-10-11 22:58:24.724738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:21.490 [2024-10-11 22:58:24.724779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.490 [2024-10-11 22:58:24.730774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.490 [2024-10-11 22:58:24.731105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.490 [2024-10-11 22:58:24.731134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.490 [2024-10-11 22:58:24.737388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.490 [2024-10-11 22:58:24.737740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.490 [2024-10-11 22:58:24.737770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.490 [2024-10-11 22:58:24.743471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.490 [2024-10-11 22:58:24.743821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.490 [2024-10-11 22:58:24.743851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.490 [2024-10-11 22:58:24.748664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.490 [2024-10-11 22:58:24.748971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.490 [2024-10-11 22:58:24.748998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.490 [2024-10-11 22:58:24.753835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.490 [2024-10-11 22:58:24.754138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.490 [2024-10-11 22:58:24.754166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.749 [2024-10-11 22:58:24.760076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.749 [2024-10-11 22:58:24.760426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.749 [2024-10-11 22:58:24.760455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.749 [2024-10-11 22:58:24.766047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.750 [2024-10-11 22:58:24.766332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.750 [2024-10-11 22:58:24.766361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.750 [2024-10-11 22:58:24.772477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.750 [2024-10-11 22:58:24.772883] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.750 [2024-10-11 22:58:24.772911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.750 [2024-10-11 22:58:24.778015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.750 [2024-10-11 22:58:24.778321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.750 [2024-10-11 22:58:24.778348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.750 [2024-10-11 22:58:24.784355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.750 [2024-10-11 22:58:24.784688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.750 [2024-10-11 22:58:24.784717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.750 [2024-10-11 22:58:24.790317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.750 [2024-10-11 22:58:24.790661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.750 [2024-10-11 22:58:24.790691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.750 [2024-10-11 22:58:24.796816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 
00:35:21.750 [2024-10-11 22:58:24.797184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.750 [2024-10-11 22:58:24.797228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.750 [2024-10-11 22:58:24.804077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.750 [2024-10-11 22:58:24.804396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.750 [2024-10-11 22:58:24.804424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.750 [2024-10-11 22:58:24.810556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.750 [2024-10-11 22:58:24.810901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.750 [2024-10-11 22:58:24.810928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.750 [2024-10-11 22:58:24.815970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.750 [2024-10-11 22:58:24.816266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.750 [2024-10-11 22:58:24.816295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.750 [2024-10-11 22:58:24.820939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.750 [2024-10-11 22:58:24.821222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.750 [2024-10-11 22:58:24.821249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.750 [2024-10-11 22:58:24.826076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.750 [2024-10-11 22:58:24.826377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.750 [2024-10-11 22:58:24.826411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.750 [2024-10-11 22:58:24.831145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.750 [2024-10-11 22:58:24.831464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.750 [2024-10-11 22:58:24.831492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.750 [2024-10-11 22:58:24.836345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.750 [2024-10-11 22:58:24.836670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.750 [2024-10-11 22:58:24.836698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.750 [2024-10-11 22:58:24.841434] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.750 [2024-10-11 22:58:24.841781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.750 [2024-10-11 22:58:24.841810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.750 [2024-10-11 22:58:24.846511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.750 [2024-10-11 22:58:24.846833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.750 [2024-10-11 22:58:24.846877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.750 [2024-10-11 22:58:24.851687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.750 [2024-10-11 22:58:24.852005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.750 [2024-10-11 22:58:24.852035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.750 [2024-10-11 22:58:24.856821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.750 [2024-10-11 22:58:24.857190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.750 [2024-10-11 22:58:24.857218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:35:21.750 [2024-10-11 22:58:24.861810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.750 [2024-10-11 22:58:24.862099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.750 [2024-10-11 22:58:24.862126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.750 [2024-10-11 22:58:24.867671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.750 [2024-10-11 22:58:24.868012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.750 [2024-10-11 22:58:24.868054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.750 [2024-10-11 22:58:24.874702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.750 [2024-10-11 22:58:24.875003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.750 [2024-10-11 22:58:24.875032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.750 [2024-10-11 22:58:24.881522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.750 [2024-10-11 22:58:24.881839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.750 [2024-10-11 22:58:24.881883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.750 [2024-10-11 22:58:24.886752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.750 [2024-10-11 22:58:24.887103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.750 [2024-10-11 22:58:24.887132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.750 [2024-10-11 22:58:24.892034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.750 [2024-10-11 22:58:24.892409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.750 [2024-10-11 22:58:24.892438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.750 [2024-10-11 22:58:24.897272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.750 [2024-10-11 22:58:24.897732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.750 [2024-10-11 22:58:24.897760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.750 [2024-10-11 22:58:24.902542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.750 [2024-10-11 22:58:24.902835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.750 [2024-10-11 22:58:24.902863] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.750 [2024-10-11 22:58:24.907927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.750 [2024-10-11 22:58:24.908244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.750 [2024-10-11 22:58:24.908273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.750 [2024-10-11 22:58:24.914037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.750 [2024-10-11 22:58:24.914323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.750 [2024-10-11 22:58:24.914352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.750 [2024-10-11 22:58:24.920439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.750 [2024-10-11 22:58:24.920722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.750 [2024-10-11 22:58:24.920751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.750 [2024-10-11 22:58:24.926809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.750 [2024-10-11 22:58:24.927113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:21.750 [2024-10-11 22:58:24.927141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.751 [2024-10-11 22:58:24.931933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.751 [2024-10-11 22:58:24.932268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.751 [2024-10-11 22:58:24.932311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.751 [2024-10-11 22:58:24.937616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.751 [2024-10-11 22:58:24.937932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.751 [2024-10-11 22:58:24.937960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.751 [2024-10-11 22:58:24.942663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.751 [2024-10-11 22:58:24.942957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.751 [2024-10-11 22:58:24.942984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.751 [2024-10-11 22:58:24.947828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.751 [2024-10-11 22:58:24.948204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.751 [2024-10-11 22:58:24.948231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.751 [2024-10-11 22:58:24.953960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.751 [2024-10-11 22:58:24.954293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.751 [2024-10-11 22:58:24.954322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.751 [2024-10-11 22:58:24.960543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.751 [2024-10-11 22:58:24.960878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.751 [2024-10-11 22:58:24.960908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.751 [2024-10-11 22:58:24.966673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.751 [2024-10-11 22:58:24.966983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.751 [2024-10-11 22:58:24.967011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.751 [2024-10-11 22:58:24.971948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.751 [2024-10-11 22:58:24.972253] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.751 [2024-10-11 22:58:24.972285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.751 [2024-10-11 22:58:24.978388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.751 [2024-10-11 22:58:24.978694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.751 [2024-10-11 22:58:24.978725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.751 [2024-10-11 22:58:24.983784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.751 [2024-10-11 22:58:24.984071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.751 [2024-10-11 22:58:24.984099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.751 [2024-10-11 22:58:24.989765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.751 [2024-10-11 22:58:24.990026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.751 [2024-10-11 22:58:24.990070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.751 [2024-10-11 22:58:24.995993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 
00:35:21.751 [2024-10-11 22:58:24.996254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.751 [2024-10-11 22:58:24.996297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.751 [2024-10-11 22:58:25.002416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.751 [2024-10-11 22:58:25.002767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.751 [2024-10-11 22:58:25.002796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.751 [2024-10-11 22:58:25.009210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.751 [2024-10-11 22:58:25.009605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.751 [2024-10-11 22:58:25.009635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.751 [2024-10-11 22:58:25.016181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:21.751 [2024-10-11 22:58:25.016561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.751 [2024-10-11 22:58:25.016590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.010 [2024-10-11 22:58:25.022378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.010 [2024-10-11 22:58:25.022780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.010 [2024-10-11 22:58:25.022810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.010 [2024-10-11 22:58:25.029429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.010 [2024-10-11 22:58:25.029704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.010 [2024-10-11 22:58:25.029734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.010 [2024-10-11 22:58:25.035753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.010 [2024-10-11 22:58:25.036021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.010 [2024-10-11 22:58:25.036049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.010 [2024-10-11 22:58:25.041492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.010 [2024-10-11 22:58:25.041749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.010 [2024-10-11 22:58:25.041778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.010 [2024-10-11 22:58:25.046868] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.010 [2024-10-11 22:58:25.047138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.010 [2024-10-11 22:58:25.047166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.010 [2024-10-11 22:58:25.052261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.010 [2024-10-11 22:58:25.052518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.010 [2024-10-11 22:58:25.052547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.010 [2024-10-11 22:58:25.058338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.010 [2024-10-11 22:58:25.058604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.010 [2024-10-11 22:58:25.058634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.010 [2024-10-11 22:58:25.064414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.010 [2024-10-11 22:58:25.064747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.010 [2024-10-11 22:58:25.064776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:35:22.010 [2024-10-11 22:58:25.071238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.010 [2024-10-11 22:58:25.071494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.010 [2024-10-11 22:58:25.071523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.010 [2024-10-11 22:58:25.076212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.011 [2024-10-11 22:58:25.076480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.011 [2024-10-11 22:58:25.076521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.011 [2024-10-11 22:58:25.080815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.011 [2024-10-11 22:58:25.081077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.011 [2024-10-11 22:58:25.081105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.011 [2024-10-11 22:58:25.085386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.011 [2024-10-11 22:58:25.085651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.011 [2024-10-11 22:58:25.085680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.011 [2024-10-11 22:58:25.090313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.011 [2024-10-11 22:58:25.090593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.011 [2024-10-11 22:58:25.090637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.011 [2024-10-11 22:58:25.096084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.011 [2024-10-11 22:58:25.096348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.011 [2024-10-11 22:58:25.096376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.011 [2024-10-11 22:58:25.100958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.011 [2024-10-11 22:58:25.101228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.011 [2024-10-11 22:58:25.101257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.011 [2024-10-11 22:58:25.105652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.011 [2024-10-11 22:58:25.105927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.011 [2024-10-11 22:58:25.105969] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.011 [2024-10-11 22:58:25.110350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.011 [2024-10-11 22:58:25.110661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.011 [2024-10-11 22:58:25.110689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.011 [2024-10-11 22:58:25.115060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.011 [2024-10-11 22:58:25.115314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.011 [2024-10-11 22:58:25.115342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.011 [2024-10-11 22:58:25.119767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.011 [2024-10-11 22:58:25.120049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.011 [2024-10-11 22:58:25.120097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.011 [2024-10-11 22:58:25.124475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.011 [2024-10-11 22:58:25.124751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:22.011 [2024-10-11 22:58:25.124794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.011 [2024-10-11 22:58:25.129186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.011 [2024-10-11 22:58:25.129456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.011 [2024-10-11 22:58:25.129483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.011 [2024-10-11 22:58:25.133924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.011 [2024-10-11 22:58:25.134193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.011 [2024-10-11 22:58:25.134220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.011 [2024-10-11 22:58:25.138564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.011 [2024-10-11 22:58:25.138837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.011 [2024-10-11 22:58:25.138864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.011 [2024-10-11 22:58:25.143235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.011 [2024-10-11 22:58:25.143504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.011 [2024-10-11 22:58:25.143546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:22.011 [2024-10-11 22:58:25.147931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.011 [2024-10-11 22:58:25.148199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.011 [2024-10-11 22:58:25.148227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... approximately 90 further repetitions of the same entry triple (tcp.c:2233:data_crc32_calc_done "Data digest error" / nvme_io_qpair_print_command WRITE / spdk_nvme_print_completion "COMMAND TRANSIENT TRANSPORT ERROR (00/22)"), all on qid:1 cid:15 nsid:1 with len:32, differing only in timestamp, lba, and sqhd, omitted ...]
00:35:22.533 [2024-10-11 22:58:25.545433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest
error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.533 [2024-10-11 22:58:25.545643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.533 [2024-10-11 22:58:25.545672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.533 [2024-10-11 22:58:25.549938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.533 [2024-10-11 22:58:25.550138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.533 [2024-10-11 22:58:25.550166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.533 [2024-10-11 22:58:25.554590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.533 [2024-10-11 22:58:25.554789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.533 [2024-10-11 22:58:25.554817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.533 [2024-10-11 22:58:25.559693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.533 [2024-10-11 22:58:25.559949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.533 [2024-10-11 22:58:25.559978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.533 [2024-10-11 22:58:25.564540] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.533 [2024-10-11 22:58:25.564750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.533 [2024-10-11 22:58:25.564779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.533 [2024-10-11 22:58:25.569046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.533 [2024-10-11 22:58:25.569243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.533 [2024-10-11 22:58:25.569272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.533 [2024-10-11 22:58:25.573590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.533 [2024-10-11 22:58:25.573788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.533 [2024-10-11 22:58:25.573816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.533 [2024-10-11 22:58:25.577981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.533 [2024-10-11 22:58:25.578180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.533 [2024-10-11 22:58:25.578209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:35:22.533 [2024-10-11 22:58:25.582615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.533 [2024-10-11 22:58:25.582813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.533 [2024-10-11 22:58:25.582842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.533 [2024-10-11 22:58:25.587051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.533 [2024-10-11 22:58:25.587251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.533 [2024-10-11 22:58:25.587280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.533 [2024-10-11 22:58:25.591483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.533 [2024-10-11 22:58:25.591687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.533 [2024-10-11 22:58:25.591716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.533 [2024-10-11 22:58:25.596042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.533 [2024-10-11 22:58:25.596240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.533 [2024-10-11 22:58:25.596269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.533 [2024-10-11 22:58:25.600558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.533 [2024-10-11 22:58:25.600757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.533 [2024-10-11 22:58:25.600786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.533 [2024-10-11 22:58:25.605003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.533 [2024-10-11 22:58:25.605203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.533 [2024-10-11 22:58:25.605231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.533 [2024-10-11 22:58:25.609574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.533 [2024-10-11 22:58:25.609772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.534 [2024-10-11 22:58:25.609800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.534 [2024-10-11 22:58:25.614205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.534 [2024-10-11 22:58:25.614403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.534 [2024-10-11 22:58:25.614431] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.534 [2024-10-11 22:58:25.618996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.534 [2024-10-11 22:58:25.619197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.534 [2024-10-11 22:58:25.619225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.534 [2024-10-11 22:58:25.623521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.534 [2024-10-11 22:58:25.623729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.534 [2024-10-11 22:58:25.623767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.534 [2024-10-11 22:58:25.628589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.534 [2024-10-11 22:58:25.628786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.534 [2024-10-11 22:58:25.628815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.534 [2024-10-11 22:58:25.634408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.534 [2024-10-11 22:58:25.634632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:22.534 [2024-10-11 22:58:25.634662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.534 [2024-10-11 22:58:25.639659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.534 [2024-10-11 22:58:25.639955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.534 [2024-10-11 22:58:25.639983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.534 [2024-10-11 22:58:25.644988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.534 [2024-10-11 22:58:25.645288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.534 [2024-10-11 22:58:25.645317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.534 [2024-10-11 22:58:25.650213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.534 [2024-10-11 22:58:25.650492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.534 [2024-10-11 22:58:25.650520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.534 5890.00 IOPS, 736.25 MiB/s [2024-10-11T20:58:25.802Z] [2024-10-11 22:58:25.656704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.534 [2024-10-11 22:58:25.656889] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.534 [2024-10-11 22:58:25.656919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.534 [2024-10-11 22:58:25.660969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.534 [2024-10-11 22:58:25.661152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.534 [2024-10-11 22:58:25.661179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.534 [2024-10-11 22:58:25.665374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.534 [2024-10-11 22:58:25.665565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.534 [2024-10-11 22:58:25.665595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.534 [2024-10-11 22:58:25.669728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.534 [2024-10-11 22:58:25.669915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.534 [2024-10-11 22:58:25.669943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.534 [2024-10-11 22:58:25.674108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 
00:35:22.534 [2024-10-11 22:58:25.674295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.534 [2024-10-11 22:58:25.674324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.534 [2024-10-11 22:58:25.678399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.534 [2024-10-11 22:58:25.678594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.534 [2024-10-11 22:58:25.678624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.534 [2024-10-11 22:58:25.682714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.534 [2024-10-11 22:58:25.682897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.534 [2024-10-11 22:58:25.682925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.534 [2024-10-11 22:58:25.686982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.534 [2024-10-11 22:58:25.687166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.534 [2024-10-11 22:58:25.687194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.534 [2024-10-11 22:58:25.691280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.534 [2024-10-11 22:58:25.691462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.534 [2024-10-11 22:58:25.691490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.534 [2024-10-11 22:58:25.695947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.534 [2024-10-11 22:58:25.696235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.534 [2024-10-11 22:58:25.696264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.534 [2024-10-11 22:58:25.701009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.534 [2024-10-11 22:58:25.701311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.534 [2024-10-11 22:58:25.701339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.534 [2024-10-11 22:58:25.706090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.534 [2024-10-11 22:58:25.706394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.534 [2024-10-11 22:58:25.706428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.534 [2024-10-11 22:58:25.711648] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.534 [2024-10-11 22:58:25.711906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.534 [2024-10-11 22:58:25.711934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.534 [2024-10-11 22:58:25.717128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.534 [2024-10-11 22:58:25.717371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.534 [2024-10-11 22:58:25.717401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.534 [2024-10-11 22:58:25.721569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.534 [2024-10-11 22:58:25.721767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.534 [2024-10-11 22:58:25.721796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.534 [2024-10-11 22:58:25.725970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.534 [2024-10-11 22:58:25.726155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.534 [2024-10-11 22:58:25.726183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:35:22.534 [2024-10-11 22:58:25.730435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.534 [2024-10-11 22:58:25.730654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.534 [2024-10-11 22:58:25.730683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.534 [2024-10-11 22:58:25.734867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.534 [2024-10-11 22:58:25.735053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.534 [2024-10-11 22:58:25.735082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.534 [2024-10-11 22:58:25.739375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.534 [2024-10-11 22:58:25.739566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.534 [2024-10-11 22:58:25.739595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.534 [2024-10-11 22:58:25.743923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.534 [2024-10-11 22:58:25.744109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.534 [2024-10-11 22:58:25.744138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.534 [2024-10-11 22:58:25.748368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.534 [2024-10-11 22:58:25.748570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.535 [2024-10-11 22:58:25.748605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.535 [2024-10-11 22:58:25.753227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.535 [2024-10-11 22:58:25.753443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.535 [2024-10-11 22:58:25.753471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.535 [2024-10-11 22:58:25.758486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.535 [2024-10-11 22:58:25.758727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.535 [2024-10-11 22:58:25.758756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.535 [2024-10-11 22:58:25.763658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.535 [2024-10-11 22:58:25.763918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.535 [2024-10-11 22:58:25.763947] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.535 [2024-10-11 22:58:25.768781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.535 [2024-10-11 22:58:25.769019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.535 [2024-10-11 22:58:25.769047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.535 [2024-10-11 22:58:25.774535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.535 [2024-10-11 22:58:25.774750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.535 [2024-10-11 22:58:25.774779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.535 [2024-10-11 22:58:25.779305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.535 [2024-10-11 22:58:25.779496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.535 [2024-10-11 22:58:25.779525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.535 [2024-10-11 22:58:25.784452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:22.535 [2024-10-11 22:58:25.784692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:22.535 [2024-10-11 22:58:25.784721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:22.535 [2024-10-11 22:58:25.789567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.535 [2024-10-11 22:58:25.789737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.535 [2024-10-11 22:58:25.789765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:22.535 [2024-10-11 22:58:25.794649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.535 [2024-10-11 22:58:25.794824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.535 [2024-10-11 22:58:25.794852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:22.803 [2024-10-11 22:58:25.801010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.803 [2024-10-11 22:58:25.801171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.803 [2024-10-11 22:58:25.801199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:22.803 [2024-10-11 22:58:25.805481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.803 [2024-10-11 22:58:25.805585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.803 [2024-10-11 22:58:25.805611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:22.803 [2024-10-11 22:58:25.809827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.803 [2024-10-11 22:58:25.809953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.803 [2024-10-11 22:58:25.809982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:22.803 [2024-10-11 22:58:25.814194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.803 [2024-10-11 22:58:25.814322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.803 [2024-10-11 22:58:25.814350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:22.803 [2024-10-11 22:58:25.818624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.803 [2024-10-11 22:58:25.818742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.803 [2024-10-11 22:58:25.818769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:22.803 [2024-10-11 22:58:25.823063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.803 [2024-10-11 22:58:25.823183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.803 [2024-10-11 22:58:25.823211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:22.803 [2024-10-11 22:58:25.827476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.803 [2024-10-11 22:58:25.827597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.803 [2024-10-11 22:58:25.827625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:22.803 [2024-10-11 22:58:25.831995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.803 [2024-10-11 22:58:25.832103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.804 [2024-10-11 22:58:25.832136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:22.804 [2024-10-11 22:58:25.836418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.804 [2024-10-11 22:58:25.836528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.804 [2024-10-11 22:58:25.836563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:22.804 [2024-10-11 22:58:25.840891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.804 [2024-10-11 22:58:25.840964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.804 [2024-10-11 22:58:25.840991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:22.804 [2024-10-11 22:58:25.845345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.804 [2024-10-11 22:58:25.845466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.804 [2024-10-11 22:58:25.845494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:22.804 [2024-10-11 22:58:25.849677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.804 [2024-10-11 22:58:25.849768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.804 [2024-10-11 22:58:25.849796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:22.804 [2024-10-11 22:58:25.854125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.804 [2024-10-11 22:58:25.854293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.804 [2024-10-11 22:58:25.854321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:22.804 [2024-10-11 22:58:25.859327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.804 [2024-10-11 22:58:25.859522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.804 [2024-10-11 22:58:25.859557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:22.804 [2024-10-11 22:58:25.864499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.804 [2024-10-11 22:58:25.864666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.804 [2024-10-11 22:58:25.864695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:22.804 [2024-10-11 22:58:25.870320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.804 [2024-10-11 22:58:25.870447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.804 [2024-10-11 22:58:25.870474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:22.804 [2024-10-11 22:58:25.875218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.804 [2024-10-11 22:58:25.875341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.804 [2024-10-11 22:58:25.875369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:22.804 [2024-10-11 22:58:25.879443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.804 [2024-10-11 22:58:25.879580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.804 [2024-10-11 22:58:25.879609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:22.804 [2024-10-11 22:58:25.883788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.804 [2024-10-11 22:58:25.883904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.804 [2024-10-11 22:58:25.883933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:22.804 [2024-10-11 22:58:25.889031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.804 [2024-10-11 22:58:25.889108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.804 [2024-10-11 22:58:25.889134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:22.804 [2024-10-11 22:58:25.894037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.804 [2024-10-11 22:58:25.894128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.804 [2024-10-11 22:58:25.894156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:22.804 [2024-10-11 22:58:25.898459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.804 [2024-10-11 22:58:25.898573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.804 [2024-10-11 22:58:25.898602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:22.804 [2024-10-11 22:58:25.902985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.804 [2024-10-11 22:58:25.903081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.804 [2024-10-11 22:58:25.903109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:22.804 [2024-10-11 22:58:25.907433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.804 [2024-10-11 22:58:25.907536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.804 [2024-10-11 22:58:25.907574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:22.804 [2024-10-11 22:58:25.912025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.804 [2024-10-11 22:58:25.912150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.804 [2024-10-11 22:58:25.912179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:22.804 [2024-10-11 22:58:25.916523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.804 [2024-10-11 22:58:25.916634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.804 [2024-10-11 22:58:25.916662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:22.804 [2024-10-11 22:58:25.920905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.804 [2024-10-11 22:58:25.921011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.804 [2024-10-11 22:58:25.921037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:22.804 [2024-10-11 22:58:25.925336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.804 [2024-10-11 22:58:25.925449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.804 [2024-10-11 22:58:25.925477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:22.804 [2024-10-11 22:58:25.929786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.804 [2024-10-11 22:58:25.929925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.804 [2024-10-11 22:58:25.929953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:22.804 [2024-10-11 22:58:25.934284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.804 [2024-10-11 22:58:25.934413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.804 [2024-10-11 22:58:25.934441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:22.805 [2024-10-11 22:58:25.938587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.805 [2024-10-11 22:58:25.938717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.805 [2024-10-11 22:58:25.938744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:22.805 [2024-10-11 22:58:25.943004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.805 [2024-10-11 22:58:25.943180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.805 [2024-10-11 22:58:25.943208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:22.805 [2024-10-11 22:58:25.947330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.805 [2024-10-11 22:58:25.947414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.805 [2024-10-11 22:58:25.947440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:22.805 [2024-10-11 22:58:25.951586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.805 [2024-10-11 22:58:25.951786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.805 [2024-10-11 22:58:25.951819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:22.805 [2024-10-11 22:58:25.956696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.805 [2024-10-11 22:58:25.956838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.805 [2024-10-11 22:58:25.956866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:22.805 [2024-10-11 22:58:25.961779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.805 [2024-10-11 22:58:25.961934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.805 [2024-10-11 22:58:25.961962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:22.805 [2024-10-11 22:58:25.967265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.805 [2024-10-11 22:58:25.967386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.805 [2024-10-11 22:58:25.967414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:22.805 [2024-10-11 22:58:25.973085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.805 [2024-10-11 22:58:25.973191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.805 [2024-10-11 22:58:25.973220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:22.805 [2024-10-11 22:58:25.978842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.805 [2024-10-11 22:58:25.978977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.805 [2024-10-11 22:58:25.979005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:22.805 [2024-10-11 22:58:25.983724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.805 [2024-10-11 22:58:25.983797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.805 [2024-10-11 22:58:25.983825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:22.805 [2024-10-11 22:58:25.987928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.805 [2024-10-11 22:58:25.988016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.805 [2024-10-11 22:58:25.988045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:22.805 [2024-10-11 22:58:25.992192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.805 [2024-10-11 22:58:25.992266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.805 [2024-10-11 22:58:25.992292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:22.805 [2024-10-11 22:58:25.996401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.805 [2024-10-11 22:58:25.996495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.805 [2024-10-11 22:58:25.996521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:22.805 [2024-10-11 22:58:26.000652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.805 [2024-10-11 22:58:26.000739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.805 [2024-10-11 22:58:26.000765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:22.805 [2024-10-11 22:58:26.004792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.805 [2024-10-11 22:58:26.004864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.805 [2024-10-11 22:58:26.004891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:22.805 [2024-10-11 22:58:26.008916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.805 [2024-10-11 22:58:26.009001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.805 [2024-10-11 22:58:26.009029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:22.805 [2024-10-11 22:58:26.013028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.805 [2024-10-11 22:58:26.013108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.805 [2024-10-11 22:58:26.013135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:22.805 [2024-10-11 22:58:26.017195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.805 [2024-10-11 22:58:26.017281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.805 [2024-10-11 22:58:26.017308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:22.805 [2024-10-11 22:58:26.021385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.805 [2024-10-11 22:58:26.021463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.805 [2024-10-11 22:58:26.021491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:22.805 [2024-10-11 22:58:26.025507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.805 [2024-10-11 22:58:26.025584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.805 [2024-10-11 22:58:26.025611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:22.805 [2024-10-11 22:58:26.029638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.805 [2024-10-11 22:58:26.029730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.805 [2024-10-11 22:58:26.029756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:22.805 [2024-10-11 22:58:26.033775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.805 [2024-10-11 22:58:26.033855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.805 [2024-10-11 22:58:26.033883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:22.805 [2024-10-11 22:58:26.037851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.805 [2024-10-11 22:58:26.037933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.805 [2024-10-11 22:58:26.037960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:22.805 [2024-10-11 22:58:26.041947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.805 [2024-10-11 22:58:26.042040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.806 [2024-10-11 22:58:26.042067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:22.806 [2024-10-11 22:58:26.046066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.806 [2024-10-11 22:58:26.046147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.806 [2024-10-11 22:58:26.046173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:22.806 [2024-10-11 22:58:26.050181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.806 [2024-10-11 22:58:26.050259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.806 [2024-10-11 22:58:26.050286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:22.806 [2024-10-11 22:58:26.054319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.806 [2024-10-11 22:58:26.054399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.806 [2024-10-11 22:58:26.054425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:22.806 [2024-10-11 22:58:26.058421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.806 [2024-10-11 22:58:26.058500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.806 [2024-10-11 22:58:26.058527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:22.806 [2024-10-11 22:58:26.062544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:22.806 [2024-10-11 22:58:26.062631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.806 [2024-10-11 22:58:26.062658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:23.067 [2024-10-11 22:58:26.066665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:23.067 [2024-10-11 22:58:26.066745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.067 [2024-10-11 22:58:26.066779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:23.067 [2024-10-11 22:58:26.070794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:23.067 [2024-10-11 22:58:26.070887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.067 [2024-10-11 22:58:26.070914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:23.067 [2024-10-11 22:58:26.074933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:23.067 [2024-10-11 22:58:26.075002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.067 [2024-10-11 22:58:26.075030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:23.067 [2024-10-11 22:58:26.079053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:23.067 [2024-10-11 22:58:26.079122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.067 [2024-10-11 22:58:26.079150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:23.067 [2024-10-11 22:58:26.083182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:23.067 [2024-10-11 22:58:26.083274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.067 [2024-10-11 22:58:26.083301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:23.067 [2024-10-11 22:58:26.087300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:23.067 [2024-10-11 22:58:26.087374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.067 [2024-10-11 22:58:26.087401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:23.067 [2024-10-11 22:58:26.091423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:23.067 [2024-10-11 22:58:26.091507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.067 [2024-10-11 22:58:26.091534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:23.067 [2024-10-11 22:58:26.095545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:23.067 [2024-10-11 22:58:26.095637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.067 [2024-10-11 22:58:26.095665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:23.067 [2024-10-11 22:58:26.099679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:23.067 [2024-10-11 22:58:26.099752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.067 [2024-10-11 22:58:26.099780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:23.067 [2024-10-11 22:58:26.103816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:23.067 [2024-10-11 22:58:26.103902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.067 [2024-10-11 22:58:26.103929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:23.067 [2024-10-11 22:58:26.107940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:23.067 [2024-10-11 22:58:26.108038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.067 [2024-10-11 22:58:26.108065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:23.067 [2024-10-11 22:58:26.112078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:23.067 [2024-10-11 22:58:26.112155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.067 [2024-10-11 22:58:26.112182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:23.067 [2024-10-11 22:58:26.116186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:23.067 [2024-10-11 22:58:26.116266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.067 [2024-10-11 22:58:26.116294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:23.067 [2024-10-11 22:58:26.120383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:23.067 [2024-10-11 22:58:26.120455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.067 [2024-10-11 22:58:26.120484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:23.067 [2024-10-11 22:58:26.124592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:23.067 [2024-10-11 22:58:26.124670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.067 [2024-10-11 22:58:26.124697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:23.067 [2024-10-11 22:58:26.128896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:23.067 [2024-10-11 22:58:26.129026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.067 [2024-10-11 22:58:26.129054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:23.067 [2024-10-11 22:58:26.133907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:23.067 [2024-10-11 22:58:26.134042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.067 [2024-10-11 22:58:26.134070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:23.067 [2024-10-11 22:58:26.138907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:23.068 [2024-10-11 22:58:26.139007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.068 [2024-10-11 22:58:26.139035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:23.068 [2024-10-11 22:58:26.144766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:23.068 [2024-10-11 22:58:26.144842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.068 [2024-10-11 22:58:26.144870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:23.068 [2024-10-11 22:58:26.150312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:23.068 [2024-10-11 22:58:26.150490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.068 [2024-10-11 22:58:26.150518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:23.068 [2024-10-11 22:58:26.156462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90
00:35:23.068 [2024-10-11 22:58:26.156545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.068 [2024-10-11 22:58:26.156582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.068 [2024-10-11 22:58:26.161868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.068 [2024-10-11 22:58:26.161970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.068 [2024-10-11 22:58:26.161998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.068 [2024-10-11 22:58:26.166337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.068 [2024-10-11 22:58:26.166423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.068 [2024-10-11 22:58:26.166451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.068 [2024-10-11 22:58:26.170471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.068 [2024-10-11 22:58:26.170564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.068 [2024-10-11 22:58:26.170592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.068 [2024-10-11 22:58:26.174663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.068 [2024-10-11 22:58:26.174731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.068 [2024-10-11 22:58:26.174759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.068 [2024-10-11 22:58:26.178997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.068 [2024-10-11 22:58:26.179074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.068 [2024-10-11 22:58:26.179101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.068 [2024-10-11 22:58:26.183403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.068 [2024-10-11 22:58:26.183471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.068 [2024-10-11 22:58:26.183504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.068 [2024-10-11 22:58:26.187720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.068 [2024-10-11 22:58:26.187807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.068 [2024-10-11 22:58:26.187834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.068 [2024-10-11 
22:58:26.192200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.068 [2024-10-11 22:58:26.192275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.068 [2024-10-11 22:58:26.192302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.068 [2024-10-11 22:58:26.196513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.068 [2024-10-11 22:58:26.196614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.068 [2024-10-11 22:58:26.196642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.068 [2024-10-11 22:58:26.200864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.068 [2024-10-11 22:58:26.200935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.068 [2024-10-11 22:58:26.200962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.068 [2024-10-11 22:58:26.205446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.068 [2024-10-11 22:58:26.205518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.068 [2024-10-11 22:58:26.205546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.068 [2024-10-11 22:58:26.209916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.068 [2024-10-11 22:58:26.210002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.068 [2024-10-11 22:58:26.210030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.068 [2024-10-11 22:58:26.214309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.068 [2024-10-11 22:58:26.214420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.068 [2024-10-11 22:58:26.214448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.068 [2024-10-11 22:58:26.218723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.068 [2024-10-11 22:58:26.218795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.068 [2024-10-11 22:58:26.218823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.068 [2024-10-11 22:58:26.223050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.068 [2024-10-11 22:58:26.223131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.068 [2024-10-11 22:58:26.223159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.068 [2024-10-11 22:58:26.227380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.068 [2024-10-11 22:58:26.227450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.068 [2024-10-11 22:58:26.227477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.068 [2024-10-11 22:58:26.231749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.068 [2024-10-11 22:58:26.231820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.068 [2024-10-11 22:58:26.231848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.068 [2024-10-11 22:58:26.236243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.068 [2024-10-11 22:58:26.236314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.068 [2024-10-11 22:58:26.236342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.068 [2024-10-11 22:58:26.240603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.068 [2024-10-11 22:58:26.240672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.068 [2024-10-11 22:58:26.240698] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.068 [2024-10-11 22:58:26.244967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.068 [2024-10-11 22:58:26.245037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.068 [2024-10-11 22:58:26.245065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.068 [2024-10-11 22:58:26.249475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.068 [2024-10-11 22:58:26.249574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.068 [2024-10-11 22:58:26.249602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.068 [2024-10-11 22:58:26.253965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.068 [2024-10-11 22:58:26.254036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.068 [2024-10-11 22:58:26.254063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.068 [2024-10-11 22:58:26.258409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.068 [2024-10-11 22:58:26.258513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:23.068 [2024-10-11 22:58:26.258546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.068 [2024-10-11 22:58:26.262982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.068 [2024-10-11 22:58:26.263054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.068 [2024-10-11 22:58:26.263082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.068 [2024-10-11 22:58:26.267372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.069 [2024-10-11 22:58:26.267452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.069 [2024-10-11 22:58:26.267480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.069 [2024-10-11 22:58:26.271930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.069 [2024-10-11 22:58:26.272013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.069 [2024-10-11 22:58:26.272040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.069 [2024-10-11 22:58:26.276437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.069 [2024-10-11 22:58:26.276509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.069 [2024-10-11 22:58:26.276536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.069 [2024-10-11 22:58:26.280965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.069 [2024-10-11 22:58:26.281047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.069 [2024-10-11 22:58:26.281073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.069 [2024-10-11 22:58:26.286051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.069 [2024-10-11 22:58:26.286124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.069 [2024-10-11 22:58:26.286151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.069 [2024-10-11 22:58:26.290204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.069 [2024-10-11 22:58:26.290291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.069 [2024-10-11 22:58:26.290318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.069 [2024-10-11 22:58:26.294917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.069 [2024-10-11 22:58:26.295042] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.069 [2024-10-11 22:58:26.295072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.069 [2024-10-11 22:58:26.300031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.069 [2024-10-11 22:58:26.300247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.069 [2024-10-11 22:58:26.300276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.069 [2024-10-11 22:58:26.305155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.069 [2024-10-11 22:58:26.305329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.069 [2024-10-11 22:58:26.305358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.069 [2024-10-11 22:58:26.310204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.069 [2024-10-11 22:58:26.310347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.069 [2024-10-11 22:58:26.310378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.069 [2024-10-11 22:58:26.315530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 
00:35:23.069 [2024-10-11 22:58:26.315687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.069 [2024-10-11 22:58:26.315718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.069 [2024-10-11 22:58:26.321180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.069 [2024-10-11 22:58:26.321314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.069 [2024-10-11 22:58:26.321344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.069 [2024-10-11 22:58:26.325408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.069 [2024-10-11 22:58:26.325488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.069 [2024-10-11 22:58:26.325516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.069 [2024-10-11 22:58:26.329633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.069 [2024-10-11 22:58:26.329771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.069 [2024-10-11 22:58:26.329800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.069 [2024-10-11 22:58:26.333919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.069 [2024-10-11 22:58:26.334041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.069 [2024-10-11 22:58:26.334071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.328 [2024-10-11 22:58:26.338426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.328 [2024-10-11 22:58:26.338493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.328 [2024-10-11 22:58:26.338520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.328 [2024-10-11 22:58:26.342699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.328 [2024-10-11 22:58:26.342829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.328 [2024-10-11 22:58:26.342857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.328 [2024-10-11 22:58:26.347366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.328 [2024-10-11 22:58:26.347581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.328 [2024-10-11 22:58:26.347610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.328 [2024-10-11 22:58:26.352334] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.328 [2024-10-11 22:58:26.352501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.328 [2024-10-11 22:58:26.352530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.328 [2024-10-11 22:58:26.357501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.328 [2024-10-11 22:58:26.357694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.328 [2024-10-11 22:58:26.357724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.328 [2024-10-11 22:58:26.363207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.328 [2024-10-11 22:58:26.363322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.328 [2024-10-11 22:58:26.363351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.328 [2024-10-11 22:58:26.369079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.328 [2024-10-11 22:58:26.369266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.328 [2024-10-11 22:58:26.369296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:35:23.328 [2024-10-11 22:58:26.375360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.328 [2024-10-11 22:58:26.375507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.328 [2024-10-11 22:58:26.375536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.328 [2024-10-11 22:58:26.381152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.328 [2024-10-11 22:58:26.381224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.328 [2024-10-11 22:58:26.381251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.328 [2024-10-11 22:58:26.386092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.328 [2024-10-11 22:58:26.386173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.328 [2024-10-11 22:58:26.386205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.328 [2024-10-11 22:58:26.390684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.328 [2024-10-11 22:58:26.390822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.328 [2024-10-11 22:58:26.390851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.328 [2024-10-11 22:58:26.394870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.328 [2024-10-11 22:58:26.394958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.328 [2024-10-11 22:58:26.394987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.328 [2024-10-11 22:58:26.399251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.328 [2024-10-11 22:58:26.399355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.328 [2024-10-11 22:58:26.399384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.328 [2024-10-11 22:58:26.404223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.329 [2024-10-11 22:58:26.404451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.329 [2024-10-11 22:58:26.404495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.329 [2024-10-11 22:58:26.409499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.329 [2024-10-11 22:58:26.409679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.329 [2024-10-11 22:58:26.409709] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.329 [2024-10-11 22:58:26.414974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.329 [2024-10-11 22:58:26.415048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.329 [2024-10-11 22:58:26.415075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.329 [2024-10-11 22:58:26.420785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.329 [2024-10-11 22:58:26.420969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.329 [2024-10-11 22:58:26.420998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.329 [2024-10-11 22:58:26.426664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.329 [2024-10-11 22:58:26.426735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.329 [2024-10-11 22:58:26.426763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.329 [2024-10-11 22:58:26.431586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.329 [2024-10-11 22:58:26.431728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:23.329 [2024-10-11 22:58:26.431758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.329 [2024-10-11 22:58:26.435947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.329 [2024-10-11 22:58:26.436053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.329 [2024-10-11 22:58:26.436081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.329 [2024-10-11 22:58:26.440084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.329 [2024-10-11 22:58:26.440155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.329 [2024-10-11 22:58:26.440182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.329 [2024-10-11 22:58:26.444533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.329 [2024-10-11 22:58:26.444628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.329 [2024-10-11 22:58:26.444656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.329 [2024-10-11 22:58:26.449507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.329 [2024-10-11 22:58:26.449733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.329 [2024-10-11 22:58:26.449762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.329 [2024-10-11 22:58:26.454559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.329 [2024-10-11 22:58:26.454701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.329 [2024-10-11 22:58:26.454730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.329 [2024-10-11 22:58:26.460124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.329 [2024-10-11 22:58:26.460276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.329 [2024-10-11 22:58:26.460306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.329 [2024-10-11 22:58:26.465220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.329 [2024-10-11 22:58:26.465301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.329 [2024-10-11 22:58:26.465328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.329 [2024-10-11 22:58:26.469391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.329 [2024-10-11 22:58:26.469515] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.329 [2024-10-11 22:58:26.469544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.329 [2024-10-11 22:58:26.473863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.329 [2024-10-11 22:58:26.473988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.329 [2024-10-11 22:58:26.474028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.329 [2024-10-11 22:58:26.478435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.329 [2024-10-11 22:58:26.478529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.329 [2024-10-11 22:58:26.478567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.329 [2024-10-11 22:58:26.482902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.329 [2024-10-11 22:58:26.482985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.329 [2024-10-11 22:58:26.483023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.329 [2024-10-11 22:58:26.487382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 
00:35:23.329 [2024-10-11 22:58:26.487459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.329 [2024-10-11 22:58:26.487486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.329 [2024-10-11 22:58:26.491788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.329 [2024-10-11 22:58:26.491904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.329 [2024-10-11 22:58:26.491933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.329 [2024-10-11 22:58:26.496257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.329 [2024-10-11 22:58:26.496360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.329 [2024-10-11 22:58:26.496388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.329 [2024-10-11 22:58:26.500751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.329 [2024-10-11 22:58:26.500883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.329 [2024-10-11 22:58:26.500911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.329 [2024-10-11 22:58:26.504977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.329 [2024-10-11 22:58:26.505055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.329 [2024-10-11 22:58:26.505081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.329 [2024-10-11 22:58:26.509209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.329 [2024-10-11 22:58:26.509354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.329 [2024-10-11 22:58:26.509389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.329 [2024-10-11 22:58:26.514215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.329 [2024-10-11 22:58:26.514402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.329 [2024-10-11 22:58:26.514431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.329 [2024-10-11 22:58:26.519411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.329 [2024-10-11 22:58:26.519587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.329 [2024-10-11 22:58:26.519616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.329 [2024-10-11 22:58:26.525058] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.329 [2024-10-11 22:58:26.525147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.329 [2024-10-11 22:58:26.525175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.329 [2024-10-11 22:58:26.530192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.329 [2024-10-11 22:58:26.530382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.329 [2024-10-11 22:58:26.530411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.329 [2024-10-11 22:58:26.535591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.329 [2024-10-11 22:58:26.535730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.329 [2024-10-11 22:58:26.535759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.329 [2024-10-11 22:58:26.540864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.329 [2024-10-11 22:58:26.540973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.330 [2024-10-11 22:58:26.541003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:35:23.330 [2024-10-11 22:58:26.546084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.330 [2024-10-11 22:58:26.546212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.330 [2024-10-11 22:58:26.546240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.330 [2024-10-11 22:58:26.550384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.330 [2024-10-11 22:58:26.550508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.330 [2024-10-11 22:58:26.550547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.330 [2024-10-11 22:58:26.555358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.330 [2024-10-11 22:58:26.555510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.330 [2024-10-11 22:58:26.555560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.330 [2024-10-11 22:58:26.560969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.330 [2024-10-11 22:58:26.561099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.330 [2024-10-11 22:58:26.561128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.330 [2024-10-11 22:58:26.566235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.330 [2024-10-11 22:58:26.566374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.330 [2024-10-11 22:58:26.566402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.330 [2024-10-11 22:58:26.570430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.330 [2024-10-11 22:58:26.570516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.330 [2024-10-11 22:58:26.570546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.330 [2024-10-11 22:58:26.574775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.330 [2024-10-11 22:58:26.574887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.330 [2024-10-11 22:58:26.574914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.330 [2024-10-11 22:58:26.579069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.330 [2024-10-11 22:58:26.579207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.330 [2024-10-11 22:58:26.579237] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.330 [2024-10-11 22:58:26.583439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.330 [2024-10-11 22:58:26.583558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.330 [2024-10-11 22:58:26.583586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.330 [2024-10-11 22:58:26.587781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.330 [2024-10-11 22:58:26.587891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.330 [2024-10-11 22:58:26.587920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.330 [2024-10-11 22:58:26.592174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.330 [2024-10-11 22:58:26.592312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.330 [2024-10-11 22:58:26.592339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.589 [2024-10-11 22:58:26.596694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.589 [2024-10-11 22:58:26.596847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:23.589 [2024-10-11 22:58:26.596877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.589 [2024-10-11 22:58:26.600994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.589 [2024-10-11 22:58:26.601140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.589 [2024-10-11 22:58:26.601169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.589 [2024-10-11 22:58:26.605325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.589 [2024-10-11 22:58:26.605442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.589 [2024-10-11 22:58:26.605471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.589 [2024-10-11 22:58:26.609590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.589 [2024-10-11 22:58:26.609701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.589 [2024-10-11 22:58:26.609728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.589 [2024-10-11 22:58:26.613952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.589 [2024-10-11 22:58:26.614073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.589 [2024-10-11 22:58:26.614103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.589 [2024-10-11 22:58:26.618171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.589 [2024-10-11 22:58:26.618282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.589 [2024-10-11 22:58:26.618311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.589 [2024-10-11 22:58:26.622522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.589 [2024-10-11 22:58:26.622657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.589 [2024-10-11 22:58:26.622684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.589 [2024-10-11 22:58:26.626975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.589 [2024-10-11 22:58:26.627047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.589 [2024-10-11 22:58:26.627074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.589 [2024-10-11 22:58:26.631334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.589 [2024-10-11 22:58:26.631423] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.589 [2024-10-11 22:58:26.631456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.589 [2024-10-11 22:58:26.635698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.589 [2024-10-11 22:58:26.635825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.589 [2024-10-11 22:58:26.635866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.589 [2024-10-11 22:58:26.640000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.589 [2024-10-11 22:58:26.640109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.589 [2024-10-11 22:58:26.640139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.589 [2024-10-11 22:58:26.644405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.589 [2024-10-11 22:58:26.644508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.589 [2024-10-11 22:58:26.644548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.590 [2024-10-11 22:58:26.648680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 
00:35:23.590 [2024-10-11 22:58:26.648776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.590 [2024-10-11 22:58:26.648803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.590 [2024-10-11 22:58:26.653004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e76c0) with pdu=0x2000166fef90 00:35:23.590 [2024-10-11 22:58:26.654795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.590 [2024-10-11 22:58:26.654823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.590 6293.00 IOPS, 786.62 MiB/s 00:35:23.590 Latency(us) 00:35:23.590 [2024-10-11T20:58:26.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:23.590 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:23.590 nvme0n1 : 2.00 6289.79 786.22 0.00 0.00 2537.08 1796.17 9514.86 00:35:23.590 [2024-10-11T20:58:26.858Z] =================================================================================================================== 00:35:23.590 [2024-10-11T20:58:26.858Z] Total : 6289.79 786.22 0.00 0.00 2537.08 1796.17 9514.86 00:35:23.590 { 00:35:23.590 "results": [ 00:35:23.590 { 00:35:23.590 "job": "nvme0n1", 00:35:23.590 "core_mask": "0x2", 00:35:23.590 "workload": "randwrite", 00:35:23.590 "status": "finished", 00:35:23.590 "queue_depth": 16, 00:35:23.590 "io_size": 131072, 00:35:23.590 "runtime": 2.00404, 00:35:23.590 "iops": 6289.7946148779465, 00:35:23.590 "mibps": 786.2243268597433, 00:35:23.590 "io_failed": 0, 00:35:23.590 "io_timeout": 0, 00:35:23.590 "avg_latency_us": 2537.0847975083375, 00:35:23.590 "min_latency_us": 
1796.171851851852, 00:35:23.590 "max_latency_us": 9514.856296296297 00:35:23.590 } 00:35:23.590 ], 00:35:23.590 "core_count": 1 00:35:23.590 } 00:35:23.590 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:23.590 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:23.590 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:23.590 | .driver_specific 00:35:23.590 | .nvme_error 00:35:23.590 | .status_code 00:35:23.590 | .command_transient_transport_error' 00:35:23.590 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:23.849 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 406 > 0 )) 00:35:23.849 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 394089 00:35:23.849 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 394089 ']' 00:35:23.849 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 394089 00:35:23.849 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:23.849 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:23.849 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 394089 00:35:23.849 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:23.849 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 
00:35:23.849 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 394089' 00:35:23.849 killing process with pid 394089 00:35:23.849 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 394089 00:35:23.849 Received shutdown signal, test time was about 2.000000 seconds 00:35:23.849 00:35:23.849 Latency(us) 00:35:23.849 [2024-10-11T20:58:27.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:23.849 [2024-10-11T20:58:27.117Z] =================================================================================================================== 00:35:23.849 [2024-10-11T20:58:27.117Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:23.849 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 394089 00:35:24.109 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 392726 00:35:24.109 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 392726 ']' 00:35:24.109 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 392726 00:35:24.109 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:24.109 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:24.109 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 392726 00:35:24.109 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:24.109 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:24.109 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 392726' 00:35:24.109 killing process with pid 392726 00:35:24.109 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 392726 00:35:24.109 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 392726 00:35:24.368 00:35:24.368 real 0m15.353s 00:35:24.368 user 0m30.844s 00:35:24.368 sys 0m4.317s 00:35:24.368 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:24.368 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:24.368 ************************************ 00:35:24.368 END TEST nvmf_digest_error 00:35:24.368 ************************************ 00:35:24.368 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:24.368 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:24.368 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:24.368 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:35:24.368 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:24.368 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:35:24.368 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:24.368 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:24.368 rmmod nvme_tcp 00:35:24.368 rmmod nvme_fabrics 00:35:24.368 rmmod nvme_keyring 00:35:24.368 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:24.368 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:35:24.368 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:35:24.368 22:58:27 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 392726 ']' 00:35:24.368 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 392726 00:35:24.368 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 392726 ']' 00:35:24.368 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 392726 00:35:24.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (392726) - No such process 00:35:24.368 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 392726 is not found' 00:35:24.368 Process with pid 392726 is not found 00:35:24.368 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:24.368 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:24.368 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:24.368 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:35:24.368 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:35:24.368 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:24.368 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:35:24.368 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:24.368 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:24.368 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:24.368 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:24.368 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:26.358 22:58:29 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:26.358 00:35:26.358 real 0m35.675s 00:35:26.358 user 1m3.274s 00:35:26.358 sys 0m10.354s 00:35:26.358 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:26.358 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:26.358 ************************************ 00:35:26.358 END TEST nvmf_digest 00:35:26.358 ************************************ 00:35:26.358 22:58:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:35:26.358 22:58:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:35:26.358 22:58:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:26.358 22:58:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:26.358 22:58:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:26.358 22:58:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:26.358 22:58:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.358 ************************************ 00:35:26.358 START TEST nvmf_bdevperf 00:35:26.358 ************************************ 00:35:26.358 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:26.358 * Looking for test storage... 
00:35:26.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:26.358 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:26.359 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:35:26.359 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:26.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.618 --rc genhtml_branch_coverage=1 00:35:26.618 --rc genhtml_function_coverage=1 00:35:26.618 --rc genhtml_legend=1 00:35:26.618 --rc geninfo_all_blocks=1 00:35:26.618 --rc geninfo_unexecuted_blocks=1 00:35:26.618 00:35:26.618 ' 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- 
# LCOV_OPTS=' 00:35:26.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.618 --rc genhtml_branch_coverage=1 00:35:26.618 --rc genhtml_function_coverage=1 00:35:26.618 --rc genhtml_legend=1 00:35:26.618 --rc geninfo_all_blocks=1 00:35:26.618 --rc geninfo_unexecuted_blocks=1 00:35:26.618 00:35:26.618 ' 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:26.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.618 --rc genhtml_branch_coverage=1 00:35:26.618 --rc genhtml_function_coverage=1 00:35:26.618 --rc genhtml_legend=1 00:35:26.618 --rc geninfo_all_blocks=1 00:35:26.618 --rc geninfo_unexecuted_blocks=1 00:35:26.618 00:35:26.618 ' 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:26.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.618 --rc genhtml_branch_coverage=1 00:35:26.618 --rc genhtml_function_coverage=1 00:35:26.618 --rc genhtml_legend=1 00:35:26.618 --rc geninfo_all_blocks=1 00:35:26.618 --rc geninfo_unexecuted_blocks=1 00:35:26.618 00:35:26.618 ' 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:26.618 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.619 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.619 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.619 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:35:26.619 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.619 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:35:26.619 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:26.619 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:26.619 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:26.619 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:26.619 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:26.619 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:26.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:26.619 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:26.619 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:26.619 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:26.619 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:26.619 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:26.619 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:26.619 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:26.619 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:26.619 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:26.619 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:26.619 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:26.619 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:26.619 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:26.619 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:26.619 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:26.619 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:26.619 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:35:26.619 22:58:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:29.151 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:29.151 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:35:29.151 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:29.151 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:29.151 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:29.151 22:58:31 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:29.151 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:29.151 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:35:29.151 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:29.151 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:35:29.151 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:35:29.151 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:35:29.151 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:35:29.151 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:35:29.151 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:35:29.151 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:29.151 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:29.151 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:29.151 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:29.151 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:29.151 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:29.151 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:29.151 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:29.151 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:29.151 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:29.151 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:29.151 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:29.151 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:29.151 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:29.151 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:29.152 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:29.152 
22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:29.152 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:29.152 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:29.152 22:58:31 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:29.152 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:29.152 22:58:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:29.152 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:35:29.152 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:29.152 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:29.152 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:29.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:29.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:35:29.152 00:35:29.152 --- 10.0.0.2 ping statistics --- 00:35:29.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:29.152 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:35:29.152 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:29.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:29.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:35:29.152 00:35:29.152 --- 10.0.0.1 ping statistics --- 00:35:29.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:29.152 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:35:29.152 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:29.152 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:35:29.152 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:29.152 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:29.152 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:29.152 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:29.152 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:29.152 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:29.152 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:29.152 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:29.152 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:29.152 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:29.152 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:29.152 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:29.152 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=396457 00:35:29.152 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:29.152 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 396457 00:35:29.152 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 396457 ']' 00:35:29.152 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:29.152 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:29.152 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:29.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:29.152 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:29.152 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:29.152 [2024-10-11 22:58:32.109240] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:35:29.152 [2024-10-11 22:58:32.109324] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:29.152 [2024-10-11 22:58:32.178463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:29.152 [2024-10-11 22:58:32.229801] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:29.152 [2024-10-11 22:58:32.229867] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:29.153 [2024-10-11 22:58:32.229882] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:29.153 [2024-10-11 22:58:32.229908] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:29.153 [2024-10-11 22:58:32.229917] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:29.153 [2024-10-11 22:58:32.231438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:29.153 [2024-10-11 22:58:32.231505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:29.153 [2024-10-11 22:58:32.231509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:29.153 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:29.153 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:35:29.153 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:29.153 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:29.153 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:29.153 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:29.153 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:29.153 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.153 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:29.153 [2024-10-11 22:58:32.381874] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:29.153 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.153 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:29.153 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.153 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:29.411 Malloc0 00:35:29.411 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:35:29.411 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:29.411 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.411 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:29.411 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.411 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:29.411 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.411 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:29.411 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.411 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:29.411 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.411 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:29.411 [2024-10-11 22:58:32.445439] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:29.411 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.411 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:29.411 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:29.411 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:35:29.411 
22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:35:29.411 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:29.411 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:29.411 { 00:35:29.411 "params": { 00:35:29.411 "name": "Nvme$subsystem", 00:35:29.411 "trtype": "$TEST_TRANSPORT", 00:35:29.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:29.411 "adrfam": "ipv4", 00:35:29.411 "trsvcid": "$NVMF_PORT", 00:35:29.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:29.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:29.411 "hdgst": ${hdgst:-false}, 00:35:29.411 "ddgst": ${ddgst:-false} 00:35:29.411 }, 00:35:29.411 "method": "bdev_nvme_attach_controller" 00:35:29.411 } 00:35:29.411 EOF 00:35:29.411 )") 00:35:29.411 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:35:29.411 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:35:29.411 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:35:29.411 22:58:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:29.411 "params": { 00:35:29.411 "name": "Nvme1", 00:35:29.411 "trtype": "tcp", 00:35:29.411 "traddr": "10.0.0.2", 00:35:29.412 "adrfam": "ipv4", 00:35:29.412 "trsvcid": "4420", 00:35:29.412 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:29.412 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:29.412 "hdgst": false, 00:35:29.412 "ddgst": false 00:35:29.412 }, 00:35:29.412 "method": "bdev_nvme_attach_controller" 00:35:29.412 }' 00:35:29.412 [2024-10-11 22:58:32.497806] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
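The `gen_nvmf_target_json` trace above builds one connection blob per subsystem from a here-doc and hands it to bdevperf over a `/dev/fd` substitution. A minimal self-contained sketch of that pattern follows; the fixed variable values are stand-ins copied from the printed result, and the output path `/tmp/nvmf_target.json` is hypothetical, not part of the suite:

```shell
# Sketch of the config-generation pattern seen in the trace above.
# Variable values mirror the printed JSON; they are assumptions here.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1

# Expand one connection blob from a here-doc, with hdgst/ddgst
# defaulting to false when the variables are unset.
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config" > /tmp/nvmf_target.json
cat /tmp/nvmf_target.json
```

The real helper additionally compacts the blobs with `jq .` and joins multiple subsystems with `IFS=,` before bdevperf reads them; this sketch covers only the single-subsystem case printed in the log.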
00:35:29.412 [2024-10-11 22:58:32.497894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid396599 ] 00:35:29.412 [2024-10-11 22:58:32.558244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:29.412 [2024-10-11 22:58:32.606706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:29.670 Running I/O for 1 seconds... 00:35:30.860 8389.00 IOPS, 32.77 MiB/s 00:35:30.860 Latency(us) 00:35:30.860 [2024-10-11T20:58:34.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:30.860 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:30.860 Verification LBA range: start 0x0 length 0x4000 00:35:30.860 Nvme1n1 : 1.05 8146.44 31.82 0.00 0.00 15059.87 3252.53 43496.49 00:35:30.860 [2024-10-11T20:58:34.128Z] =================================================================================================================== 00:35:30.860 [2024-10-11T20:58:34.128Z] Total : 8146.44 31.82 0.00 0.00 15059.87 3252.53 43496.49 00:35:30.860 22:58:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=396742 00:35:30.860 22:58:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:30.860 22:58:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:30.860 22:58:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:30.860 22:58:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:35:30.860 22:58:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:35:30.860 22:58:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for 
subsystem in "${@:-1}" 00:35:30.860 22:58:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:30.860 { 00:35:30.860 "params": { 00:35:30.860 "name": "Nvme$subsystem", 00:35:30.861 "trtype": "$TEST_TRANSPORT", 00:35:30.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:30.861 "adrfam": "ipv4", 00:35:30.861 "trsvcid": "$NVMF_PORT", 00:35:30.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:30.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:30.861 "hdgst": ${hdgst:-false}, 00:35:30.861 "ddgst": ${ddgst:-false} 00:35:30.861 }, 00:35:30.861 "method": "bdev_nvme_attach_controller" 00:35:30.861 } 00:35:30.861 EOF 00:35:30.861 )") 00:35:30.861 22:58:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:35:30.861 22:58:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:35:30.861 22:58:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:35:30.861 22:58:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:30.861 "params": { 00:35:30.861 "name": "Nvme1", 00:35:30.861 "trtype": "tcp", 00:35:30.861 "traddr": "10.0.0.2", 00:35:30.861 "adrfam": "ipv4", 00:35:30.861 "trsvcid": "4420", 00:35:30.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:30.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:30.861 "hdgst": false, 00:35:30.861 "ddgst": false 00:35:30.861 }, 00:35:30.861 "method": "bdev_nvme_attach_controller" 00:35:30.861 }' 00:35:30.861 [2024-10-11 22:58:34.128470] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
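Shortly after this second 15-second verify run starts, `host/bdevperf.sh@33` does a `kill -9` on pid 396457 (by context, the nvmf target started earlier in this run) while I/O is still in flight. The host side then prints a command record plus an "ABORTED - SQ DELETION" completion for every queued request, which is the long run of `nvme_qpair.c` notices that follows. A small sketch for condensing such a flood, run over two sample records copied from this trace (the `/tmp` path is a hypothetical stand-in for a captured console log):

```shell
# Write two representative records from the trace to a scratch file.
cat > /tmp/qpair_sample.log <<'EOF'
[2024-10-11 22:58:37.093741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-11 22:58:37.093789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
EOF

# Count aborted completions, then tally which opcodes were in flight
# and on which submission queue.
grep -c 'ABORTED - SQ DELETION' /tmp/qpair_sample.log
grep -oE '(WRITE|READ) sqid:[0-9]+' /tmp/qpair_sample.log | sort | uniq -c
```

Status 00/08 in these completions is the generic status code set "ABORTED - SQ DELETION": the abrupt TCP disconnect tears down the submission queue, so outstanding commands are completed locally with that status rather than by the (now dead) target.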
00:35:30.861 [2024-10-11 22:58:34.128571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid396742 ] 00:35:31.119 [2024-10-11 22:58:34.188189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:31.119 [2024-10-11 22:58:34.233810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:31.376 Running I/O for 15 seconds... 00:35:33.241 8535.00 IOPS, 33.34 MiB/s [2024-10-11T20:58:37.446Z] 8613.50 IOPS, 33.65 MiB/s [2024-10-11T20:58:37.446Z] 22:58:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 396457 00:35:34.178 22:58:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:34.178 [2024-10-11 22:58:37.093741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.178 [2024-10-11 22:58:37.093789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.178 [2024-10-11 22:58:37.093820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:49432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.178 [2024-10-11 22:58:37.093854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.178 [2024-10-11 22:58:37.093873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.178 [2024-10-11 22:58:37.093889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.178 [2024-10-11 22:58:37.093922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:64 nsid:1 lba:49448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.178 [2024-10-11 22:58:37.093936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.178 [2024-10-11 22:58:37.093954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.178 [2024-10-11 22:58:37.093985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.178 [2024-10-11 22:58:37.094004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:49464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.178 [2024-10-11 22:58:37.094022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.178 [2024-10-11 22:58:37.094055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:49472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.178 [2024-10-11 22:58:37.094068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.178 [2024-10-11 22:58:37.094084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.178 [2024-10-11 22:58:37.094121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.178 [2024-10-11 22:58:37.094139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:49488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.178 [2024-10-11 22:58:37.094152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:35:34.178 [2024-10-11 22:58:37.094166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:49496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.178 [2024-10-11 22:58:37.094181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.178 [2024-10-11 22:58:37.094194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:49504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.178 [2024-10-11 22:58:37.094207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.178 [2024-10-11 22:58:37.094222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:49512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.178 [2024-10-11 22:58:37.094237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.178 [2024-10-11 22:58:37.094254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:49520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.178 [2024-10-11 22:58:37.094267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.178 [2024-10-11 22:58:37.094283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.178 [2024-10-11 22:58:37.094298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.178 [2024-10-11 22:58:37.094313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.178 [2024-10-11 22:58:37.094327] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.178 [2024-10-11 22:58:37.094342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:49544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.178 [2024-10-11 22:58:37.094356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.178 [2024-10-11 22:58:37.094371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.178 [2024-10-11 22:58:37.094398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.178 [2024-10-11 22:58:37.094413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.178 [2024-10-11 22:58:37.094432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.178 [2024-10-11 22:58:37.094447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.178 [2024-10-11 22:58:37.094460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.178 [2024-10-11 22:58:37.094474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:49576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.178 [2024-10-11 22:58:37.094486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.178 [2024-10-11 22:58:37.094504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 
lba:49584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.178 [2024-10-11 22:58:37.094517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.178 [2024-10-11 22:58:37.094545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:49592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.178 [2024-10-11 22:58:37.094570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.178 [2024-10-11 22:58:37.094587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.178 [2024-10-11 22:58:37.094602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.178 [2024-10-11 22:58:37.094617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:49608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.179 [2024-10-11 22:58:37.094632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.094647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:49616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.179 [2024-10-11 22:58:37.094661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.094677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:49104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.179 [2024-10-11 22:58:37.094691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 
22:58:37.094706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:49112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.179 [2024-10-11 22:58:37.094721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.094736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:49120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.179 [2024-10-11 22:58:37.094750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.094766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:49128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.179 [2024-10-11 22:58:37.094780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.094795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:49136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.179 [2024-10-11 22:58:37.094808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.094824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:49144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.179 [2024-10-11 22:58:37.094838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.094868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:49152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.179 [2024-10-11 22:58:37.094880] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.094893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:49160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.179 [2024-10-11 22:58:37.094909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.094923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.179 [2024-10-11 22:58:37.094940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.094954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.179 [2024-10-11 22:58:37.094966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.094979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:49640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.179 [2024-10-11 22:58:37.094991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.095004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.179 [2024-10-11 22:58:37.095016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.095029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:49656 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:35:34.179 [2024-10-11 22:58:37.095041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.095055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:49664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.179 [2024-10-11 22:58:37.095067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.095080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:49672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.179 [2024-10-11 22:58:37.095092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.095105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:49680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.179 [2024-10-11 22:58:37.095117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.095130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.179 [2024-10-11 22:58:37.095142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.095155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:49696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.179 [2024-10-11 22:58:37.095166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.095180] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:49704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.179 [2024-10-11 22:58:37.095192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.095205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.179 [2024-10-11 22:58:37.095217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.095229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:49720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.179 [2024-10-11 22:58:37.095249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.095262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:49728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.179 [2024-10-11 22:58:37.095275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.095288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.179 [2024-10-11 22:58:37.095300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.095313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:49744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.179 [2024-10-11 22:58:37.095325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.095338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.179 [2024-10-11 22:58:37.095359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.095373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:49760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.179 [2024-10-11 22:58:37.095385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.095398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:49768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.179 [2024-10-11 22:58:37.095410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.095423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.179 [2024-10-11 22:58:37.095436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.095449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.179 [2024-10-11 22:58:37.095460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.095474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.179 
[2024-10-11 22:58:37.095486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.095498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:49800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.179 [2024-10-11 22:58:37.095511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.095524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:49808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.179 [2024-10-11 22:58:37.095561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.095579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:49816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.179 [2024-10-11 22:58:37.095594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.095613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:49824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.179 [2024-10-11 22:58:37.095628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.095644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:49832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.179 [2024-10-11 22:58:37.095658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.179 [2024-10-11 22:58:37.095673] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.179 [2024-10-11 22:58:37.095687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.179 [2024-10-11 22:58:37.095702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.179 [2024-10-11 22:58:37.095715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.179 [2024-10-11 22:58:37.095730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.179 [2024-10-11 22:58:37.095744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.179 [2024-10-11 22:58:37.095759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:49864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.179 [2024-10-11 22:58:37.095774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.179 [2024-10-11 22:58:37.095790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:49872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.179 [2024-10-11 22:58:37.095805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.179 [2024-10-11 22:58:37.095820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.179 [2024-10-11 22:58:37.095853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.095869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:49888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.095883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.095912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:49896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.095925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.095938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:49904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.095951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.095963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.095975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.095988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.096004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.096030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:49936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.096056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:49168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.180 [2024-10-11 22:58:37.096081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:49944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.096106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:49952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.096132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.096158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:49968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.096183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:49976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.096209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:49984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.096234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:49992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.096259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:50000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.096288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:50008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.096314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.180
[2024-10-11 22:58:37.096330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:50016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.096343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:50024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.096368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:50032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.096394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:50040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.096418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:50048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.096443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:50056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.096468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:50064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.096493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:50072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.096518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:50080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.096568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:50088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.096600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:50096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.096629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:50104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.096658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:50112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.096687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:49176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.180 [2024-10-11 22:58:37.096721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:49184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.180 [2024-10-11 22:58:37.096756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:49192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.180 [2024-10-11 22:58:37.096786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:49200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.180 [2024-10-11 22:58:37.096815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.180
[2024-10-11 22:58:37.096830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:49208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.180 [2024-10-11 22:58:37.096844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:49216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.180 [2024-10-11 22:58:37.096885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:49224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.180 [2024-10-11 22:58:37.096910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:50120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:34.180 [2024-10-11 22:58:37.096936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.180 [2024-10-11 22:58:37.096962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.096975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:49240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.180 [2024-10-11 22:58:37.096987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.097000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:49248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.180 [2024-10-11 22:58:37.097012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.097026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.180 [2024-10-11 22:58:37.097037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.180 [2024-10-11 22:58:37.097051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:49264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.181 [2024-10-11 22:58:37.097066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.181 [2024-10-11 22:58:37.097080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:49272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.181 [2024-10-11 22:58:37.097092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.181 [2024-10-11 22:58:37.097105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:49280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.181 [2024-10-11 22:58:37.097117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.181 [2024-10-11 22:58:37.097130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:49288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.181 [2024-10-11 22:58:37.097142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.181 [2024-10-11 22:58:37.097155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:49296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.181 [2024-10-11 22:58:37.097168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.181 [2024-10-11 22:58:37.097181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:49304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.181 [2024-10-11 22:58:37.097198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.181 [2024-10-11 22:58:37.097212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:49312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.181 [2024-10-11 22:58:37.097224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.181 [2024-10-11 22:58:37.097238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:49320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.181 [2024-10-11 22:58:37.097250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.181 [2024-10-11 22:58:37.097263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:49328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.181 [2024-10-11 22:58:37.097275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.181
[2024-10-11 22:58:37.097288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:49336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.181 [2024-10-11 22:58:37.097300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.181 [2024-10-11 22:58:37.097312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:49344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.181 [2024-10-11 22:58:37.097324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.181 [2024-10-11 22:58:37.097338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:49352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.181 [2024-10-11 22:58:37.097350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.181 [2024-10-11 22:58:37.097363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:49360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.181 [2024-10-11 22:58:37.097375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.181 [2024-10-11 22:58:37.097391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:49368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.181 [2024-10-11 22:58:37.097403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.181 [2024-10-11 22:58:37.097416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.181 [2024-10-11 22:58:37.097428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.181 [2024-10-11 22:58:37.097441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:49384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.181 [2024-10-11 22:58:37.097453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.181 [2024-10-11 22:58:37.097467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:49392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.181 [2024-10-11 22:58:37.097478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.181 [2024-10-11 22:58:37.097492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:49400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.181 [2024-10-11 22:58:37.097504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.181 [2024-10-11 22:58:37.097517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:49408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.181 [2024-10-11 22:58:37.097543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.181 [2024-10-11 22:58:37.097568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2267f70 is same with the state(6) to be set
00:35:34.181 [2024-10-11 22:58:37.097586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:35:34.181 [2024-10-11 22:58:37.097598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:35:34.181 [2024-10-11 22:58:37.097609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49416 len:8 PRP1 0x0 PRP2 0x0
00:35:34.181 [2024-10-11 22:58:37.097622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.181 [2024-10-11 22:58:37.097684] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2267f70 was disconnected and freed. reset controller.
00:35:34.181 [2024-10-11 22:58:37.097761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:35:34.181 [2024-10-11 22:58:37.097783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.181 [2024-10-11 22:58:37.097798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:35:34.181 [2024-10-11 22:58:37.097812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.181 [2024-10-11 22:58:37.097826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:35:34.181 [2024-10-11 22:58:37.097840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.181 [2024-10-11 22:58:37.097869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:35:34.181 [2024-10-11 22:58:37.097883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:34.181 [2024-10-11 22:58:37.097900] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.181 [2024-10-11 22:58:37.100967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.181 [2024-10-11 22:58:37.100998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.181 [2024-10-11 22:58:37.101642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.181 [2024-10-11 22:58:37.101673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.181 [2024-10-11 22:58:37.101689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.181 [2024-10-11 22:58:37.101920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.181 [2024-10-11 22:58:37.102123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.181 [2024-10-11 22:58:37.102141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.181 [2024-10-11 22:58:37.102154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.181 [2024-10-11 22:58:37.105326] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.181 [2024-10-11 22:58:37.114444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.181 [2024-10-11 22:58:37.114884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.181 [2024-10-11 22:58:37.114912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.181 [2024-10-11 22:58:37.114928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.181 [2024-10-11 22:58:37.115144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.181 [2024-10-11 22:58:37.115346] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.181 [2024-10-11 22:58:37.115364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.181 [2024-10-11 22:58:37.115376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.181 [2024-10-11 22:58:37.118369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.181 [2024-10-11 22:58:37.127684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.181 [2024-10-11 22:58:37.128089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.181 [2024-10-11 22:58:37.128117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.181 [2024-10-11 22:58:37.128132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.181 [2024-10-11 22:58:37.128348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.181 [2024-10-11 22:58:37.128578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.181 [2024-10-11 22:58:37.128615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.181 [2024-10-11 22:58:37.128629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.181 [2024-10-11 22:58:37.131496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.181 [2024-10-11 22:58:37.140664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.181 [2024-10-11 22:58:37.140985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.181 [2024-10-11 22:58:37.141012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.181 [2024-10-11 22:58:37.141028] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.181 [2024-10-11 22:58:37.141243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.181 [2024-10-11 22:58:37.141448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.181 [2024-10-11 22:58:37.141467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.181 [2024-10-11 22:58:37.141479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.181 [2024-10-11 22:58:37.144374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.181 [2024-10-11 22:58:37.153742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.181 [2024-10-11 22:58:37.154085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.181 [2024-10-11 22:58:37.154112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.182 [2024-10-11 22:58:37.154127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.182 [2024-10-11 22:58:37.154335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.182 [2024-10-11 22:58:37.154562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.182 [2024-10-11 22:58:37.154582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.182 [2024-10-11 22:58:37.154595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.182 [2024-10-11 22:58:37.157349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.182 [2024-10-11 22:58:37.166717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.182 [2024-10-11 22:58:37.167028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.182 [2024-10-11 22:58:37.167055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.182 [2024-10-11 22:58:37.167070] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.182 [2024-10-11 22:58:37.167286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.182 [2024-10-11 22:58:37.167488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.182 [2024-10-11 22:58:37.167507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.182 [2024-10-11 22:58:37.167520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.182 [2024-10-11 22:58:37.170414] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.182 [2024-10-11 22:58:37.179822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.182 [2024-10-11 22:58:37.180229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.182 [2024-10-11 22:58:37.180256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.182 [2024-10-11 22:58:37.180272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.182 [2024-10-11 22:58:37.180512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.182 [2024-10-11 22:58:37.180734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.182 [2024-10-11 22:58:37.180755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.182 [2024-10-11 22:58:37.180767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.182 [2024-10-11 22:58:37.183638] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.182 [2024-10-11 22:58:37.192886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.182 [2024-10-11 22:58:37.193260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.182 [2024-10-11 22:58:37.193287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.182 [2024-10-11 22:58:37.193302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.182 [2024-10-11 22:58:37.193518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.182 [2024-10-11 22:58:37.193736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.182 [2024-10-11 22:58:37.193757] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.182 [2024-10-11 22:58:37.193770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.182 [2024-10-11 22:58:37.196563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.182 [2024-10-11 22:58:37.205921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.182 [2024-10-11 22:58:37.206263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.182 [2024-10-11 22:58:37.206290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.182 [2024-10-11 22:58:37.206306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.182 [2024-10-11 22:58:37.206540] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.182 [2024-10-11 22:58:37.206753] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.182 [2024-10-11 22:58:37.206772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.182 [2024-10-11 22:58:37.206784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.182 [2024-10-11 22:58:37.209516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.182 [2024-10-11 22:58:37.218936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.182 [2024-10-11 22:58:37.219336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.182 [2024-10-11 22:58:37.219363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.182 [2024-10-11 22:58:37.219378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.182 [2024-10-11 22:58:37.219603] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.182 [2024-10-11 22:58:37.219808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.182 [2024-10-11 22:58:37.219827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.182 [2024-10-11 22:58:37.219843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.182 [2024-10-11 22:58:37.222637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.182 [2024-10-11 22:58:37.232110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.182 [2024-10-11 22:58:37.232489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.182 [2024-10-11 22:58:37.232516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.182 [2024-10-11 22:58:37.232546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.182 [2024-10-11 22:58:37.232803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.182 [2024-10-11 22:58:37.233009] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.182 [2024-10-11 22:58:37.233028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.182 [2024-10-11 22:58:37.233041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.182 [2024-10-11 22:58:37.235898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.182 [2024-10-11 22:58:37.245101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.182 [2024-10-11 22:58:37.245447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.182 [2024-10-11 22:58:37.245473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.182 [2024-10-11 22:58:37.245489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.182 [2024-10-11 22:58:37.245747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.182 [2024-10-11 22:58:37.245955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.182 [2024-10-11 22:58:37.245974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.182 [2024-10-11 22:58:37.245986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.182 [2024-10-11 22:58:37.248829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.182 [2024-10-11 22:58:37.258135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.182 [2024-10-11 22:58:37.258481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.182 [2024-10-11 22:58:37.258508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.182 [2024-10-11 22:58:37.258523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.182 [2024-10-11 22:58:37.258782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.182 [2024-10-11 22:58:37.259001] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.182 [2024-10-11 22:58:37.259020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.182 [2024-10-11 22:58:37.259032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.182 [2024-10-11 22:58:37.261885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.182 [2024-10-11 22:58:37.271160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.182 [2024-10-11 22:58:37.271563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.182 [2024-10-11 22:58:37.271610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.182 [2024-10-11 22:58:37.271627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.183 [2024-10-11 22:58:37.271858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.183 [2024-10-11 22:58:37.272061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.183 [2024-10-11 22:58:37.272080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.183 [2024-10-11 22:58:37.272092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.183 [2024-10-11 22:58:37.274968] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.183 [2024-10-11 22:58:37.284276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.183 [2024-10-11 22:58:37.284680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.183 [2024-10-11 22:58:37.284724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.183 [2024-10-11 22:58:37.284740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.183 [2024-10-11 22:58:37.284981] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.183 [2024-10-11 22:58:37.285184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.183 [2024-10-11 22:58:37.285203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.183 [2024-10-11 22:58:37.285215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.183 [2024-10-11 22:58:37.288187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.183 [2024-10-11 22:58:37.297481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.183 [2024-10-11 22:58:37.297916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.183 [2024-10-11 22:58:37.297958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.183 [2024-10-11 22:58:37.297974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.183 [2024-10-11 22:58:37.298200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.183 [2024-10-11 22:58:37.298387] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.183 [2024-10-11 22:58:37.298406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.183 [2024-10-11 22:58:37.298418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.183 [2024-10-11 22:58:37.301237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.183 [2024-10-11 22:58:37.310475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.183 [2024-10-11 22:58:37.310802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.183 [2024-10-11 22:58:37.310830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.183 [2024-10-11 22:58:37.310846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.183 [2024-10-11 22:58:37.311072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.183 [2024-10-11 22:58:37.311279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.183 [2024-10-11 22:58:37.311298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.183 [2024-10-11 22:58:37.311311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.183 [2024-10-11 22:58:37.314204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.183 [2024-10-11 22:58:37.323624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.183 [2024-10-11 22:58:37.324000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.183 [2024-10-11 22:58:37.324027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.183 [2024-10-11 22:58:37.324042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.183 [2024-10-11 22:58:37.324270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.183 [2024-10-11 22:58:37.324473] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.183 [2024-10-11 22:58:37.324491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.183 [2024-10-11 22:58:37.324503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.183 [2024-10-11 22:58:37.327363] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.183 [2024-10-11 22:58:37.336783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.183 [2024-10-11 22:58:37.337146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.183 [2024-10-11 22:58:37.337174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.183 [2024-10-11 22:58:37.337190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.183 [2024-10-11 22:58:37.337424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.183 [2024-10-11 22:58:37.337675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.183 [2024-10-11 22:58:37.337697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.183 [2024-10-11 22:58:37.337709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.183 [2024-10-11 22:58:37.340674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.183 [2024-10-11 22:58:37.349889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.183 [2024-10-11 22:58:37.350279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.183 [2024-10-11 22:58:37.350307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.183 [2024-10-11 22:58:37.350322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.183 [2024-10-11 22:58:37.350575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.183 [2024-10-11 22:58:37.350824] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.183 [2024-10-11 22:58:37.350845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.183 [2024-10-11 22:58:37.350861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.183 [2024-10-11 22:58:37.354481] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.183 [2024-10-11 22:58:37.363676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.183 [2024-10-11 22:58:37.363989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.183 [2024-10-11 22:58:37.364017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.183 [2024-10-11 22:58:37.364032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.183 [2024-10-11 22:58:37.364254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.183 [2024-10-11 22:58:37.364479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.183 [2024-10-11 22:58:37.364499] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.183 [2024-10-11 22:58:37.364511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.183 [2024-10-11 22:58:37.367426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.183 [2024-10-11 22:58:37.376852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.183 [2024-10-11 22:58:37.377241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.183 [2024-10-11 22:58:37.377268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.183 [2024-10-11 22:58:37.377284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.183 [2024-10-11 22:58:37.377499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.183 [2024-10-11 22:58:37.377741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.183 [2024-10-11 22:58:37.377763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.183 [2024-10-11 22:58:37.377776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.183 [2024-10-11 22:58:37.380711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.183 [2024-10-11 22:58:37.389881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.183 [2024-10-11 22:58:37.390226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.183 [2024-10-11 22:58:37.390254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.183 [2024-10-11 22:58:37.390269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.183 [2024-10-11 22:58:37.390503] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.183 [2024-10-11 22:58:37.390734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.183 [2024-10-11 22:58:37.390754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.183 [2024-10-11 22:58:37.390766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.183 [2024-10-11 22:58:37.393618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.183 [2024-10-11 22:58:37.403026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.183 [2024-10-11 22:58:37.403431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.183 [2024-10-11 22:58:37.403459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.183 [2024-10-11 22:58:37.403483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.183 [2024-10-11 22:58:37.403728] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.183 [2024-10-11 22:58:37.403950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.183 [2024-10-11 22:58:37.403969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.183 [2024-10-11 22:58:37.403981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.183 [2024-10-11 22:58:37.406822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.183 [2024-10-11 22:58:37.416230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.183 [2024-10-11 22:58:37.416636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.183 [2024-10-11 22:58:37.416664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.183 [2024-10-11 22:58:37.416679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.184 [2024-10-11 22:58:37.416908] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.184 [2024-10-11 22:58:37.417111] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.184 [2024-10-11 22:58:37.417130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.184 [2024-10-11 22:58:37.417143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.184 [2024-10-11 22:58:37.420016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.184 [2024-10-11 22:58:37.429364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.184 [2024-10-11 22:58:37.429737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.184 [2024-10-11 22:58:37.429766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.184 [2024-10-11 22:58:37.429783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.184 [2024-10-11 22:58:37.430035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.184 [2024-10-11 22:58:37.430236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.184 [2024-10-11 22:58:37.430255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.184 [2024-10-11 22:58:37.430267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.184 [2024-10-11 22:58:37.433127] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.443 [2024-10-11 22:58:37.442894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.443 [2024-10-11 22:58:37.443296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.443 [2024-10-11 22:58:37.443324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.443 [2024-10-11 22:58:37.443340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.443 [2024-10-11 22:58:37.443579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.443 [2024-10-11 22:58:37.443821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.443 [2024-10-11 22:58:37.443862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.443 [2024-10-11 22:58:37.443875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.443 [2024-10-11 22:58:37.447023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.443 [2024-10-11 22:58:37.456374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.443 [2024-10-11 22:58:37.456815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.443 [2024-10-11 22:58:37.456844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.443 [2024-10-11 22:58:37.456875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.443 [2024-10-11 22:58:37.457108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.443 [2024-10-11 22:58:37.457310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.443 [2024-10-11 22:58:37.457329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.443 [2024-10-11 22:58:37.457341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.443 [2024-10-11 22:58:37.460239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.443 7507.33 IOPS, 29.33 MiB/s [2024-10-11T20:58:37.711Z] [2024-10-11 22:58:37.470696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.443 [2024-10-11 22:58:37.471100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.443 [2024-10-11 22:58:37.471128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.443 [2024-10-11 22:58:37.471143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.443 [2024-10-11 22:58:37.471371] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.443 [2024-10-11 22:58:37.471600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.443 [2024-10-11 22:58:37.471621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.443 [2024-10-11 22:58:37.471633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.443 [2024-10-11 22:58:37.474464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.443 [2024-10-11 22:58:37.483900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.443 [2024-10-11 22:58:37.484244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.443 [2024-10-11 22:58:37.484271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.443 [2024-10-11 22:58:37.484287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.443 [2024-10-11 22:58:37.484520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.443 [2024-10-11 22:58:37.484722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.443 [2024-10-11 22:58:37.484743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.443 [2024-10-11 22:58:37.484755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.443 [2024-10-11 22:58:37.487547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.443 [2024-10-11 22:58:37.496924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.443 [2024-10-11 22:58:37.497261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.443 [2024-10-11 22:58:37.497289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.443 [2024-10-11 22:58:37.497305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.443 [2024-10-11 22:58:37.497533] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.443 [2024-10-11 22:58:37.497744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.443 [2024-10-11 22:58:37.497764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.443 [2024-10-11 22:58:37.497776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.443 [2024-10-11 22:58:37.500512] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.443 [2024-10-11 22:58:37.509871] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.443 [2024-10-11 22:58:37.510213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.443 [2024-10-11 22:58:37.510241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.443 [2024-10-11 22:58:37.510256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.443 [2024-10-11 22:58:37.510490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.443 [2024-10-11 22:58:37.510719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.443 [2024-10-11 22:58:37.510740] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.443 [2024-10-11 22:58:37.510752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.443 [2024-10-11 22:58:37.513601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.443 [2024-10-11 22:58:37.523027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.443 [2024-10-11 22:58:37.523369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.443 [2024-10-11 22:58:37.523396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.443 [2024-10-11 22:58:37.523412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.443 [2024-10-11 22:58:37.523658] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.443 [2024-10-11 22:58:37.523882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.443 [2024-10-11 22:58:37.523901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.443 [2024-10-11 22:58:37.523912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.443 [2024-10-11 22:58:37.526753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.443 [2024-10-11 22:58:37.536079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.443 [2024-10-11 22:58:37.536530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.443 [2024-10-11 22:58:37.536588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.443 [2024-10-11 22:58:37.536604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.443 [2024-10-11 22:58:37.536850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.443 [2024-10-11 22:58:37.537038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.444 [2024-10-11 22:58:37.537057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.444 [2024-10-11 22:58:37.537069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.444 [2024-10-11 22:58:37.539812] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.444 [2024-10-11 22:58:37.549167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.444 [2024-10-11 22:58:37.549570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.444 [2024-10-11 22:58:37.549614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.444 [2024-10-11 22:58:37.549629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.444 [2024-10-11 22:58:37.549868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.444 [2024-10-11 22:58:37.550055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.444 [2024-10-11 22:58:37.550073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.444 [2024-10-11 22:58:37.550085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.444 [2024-10-11 22:58:37.552827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.444 [2024-10-11 22:58:37.562208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.444 [2024-10-11 22:58:37.562607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.444 [2024-10-11 22:58:37.562635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.444 [2024-10-11 22:58:37.562651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.444 [2024-10-11 22:58:37.562876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.444 [2024-10-11 22:58:37.563078] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.444 [2024-10-11 22:58:37.563096] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.444 [2024-10-11 22:58:37.563109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.444 [2024-10-11 22:58:37.565961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.444 [2024-10-11 22:58:37.575349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.444 [2024-10-11 22:58:37.575713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.444 [2024-10-11 22:58:37.575742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.444 [2024-10-11 22:58:37.575758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.444 [2024-10-11 22:58:37.576002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.444 [2024-10-11 22:58:37.576189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.444 [2024-10-11 22:58:37.576207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.444 [2024-10-11 22:58:37.576224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.444 [2024-10-11 22:58:37.579045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.444 [2024-10-11 22:58:37.588570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.444 [2024-10-11 22:58:37.588920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.444 [2024-10-11 22:58:37.588948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.444 [2024-10-11 22:58:37.588964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.444 [2024-10-11 22:58:37.589198] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.444 [2024-10-11 22:58:37.589401] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.444 [2024-10-11 22:58:37.589420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.444 [2024-10-11 22:58:37.589433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.444 [2024-10-11 22:58:37.592286] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.444 [2024-10-11 22:58:37.601638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.444 [2024-10-11 22:58:37.602015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.444 [2024-10-11 22:58:37.602042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.444 [2024-10-11 22:58:37.602058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.444 [2024-10-11 22:58:37.602292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.444 [2024-10-11 22:58:37.602525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.444 [2024-10-11 22:58:37.602546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.444 [2024-10-11 22:58:37.602586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.444 [2024-10-11 22:58:37.606074] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.444 [2024-10-11 22:58:37.614946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.444 [2024-10-11 22:58:37.615350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.444 [2024-10-11 22:58:37.615378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.444 [2024-10-11 22:58:37.615393] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.444 [2024-10-11 22:58:37.615625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.444 [2024-10-11 22:58:37.615855] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.444 [2024-10-11 22:58:37.615874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.444 [2024-10-11 22:58:37.615885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.444 [2024-10-11 22:58:37.618790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.444 [2024-10-11 22:58:37.628220] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.444 [2024-10-11 22:58:37.628634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.444 [2024-10-11 22:58:37.628663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.444 [2024-10-11 22:58:37.628679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.444 [2024-10-11 22:58:37.628912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.444 [2024-10-11 22:58:37.629115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.444 [2024-10-11 22:58:37.629133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.444 [2024-10-11 22:58:37.629146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.444 [2024-10-11 22:58:37.632005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.444 [2024-10-11 22:58:37.641199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.444 [2024-10-11 22:58:37.641603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.444 [2024-10-11 22:58:37.641631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.444 [2024-10-11 22:58:37.641647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.444 [2024-10-11 22:58:37.641882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.444 [2024-10-11 22:58:37.642085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.444 [2024-10-11 22:58:37.642103] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.444 [2024-10-11 22:58:37.642115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.444 [2024-10-11 22:58:37.644855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.444 [2024-10-11 22:58:37.654217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.444 [2024-10-11 22:58:37.654620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.444 [2024-10-11 22:58:37.654647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.444 [2024-10-11 22:58:37.654662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.444 [2024-10-11 22:58:37.654877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.444 [2024-10-11 22:58:37.655079] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.444 [2024-10-11 22:58:37.655097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.444 [2024-10-11 22:58:37.655109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.444 [2024-10-11 22:58:37.657887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.444 [2024-10-11 22:58:37.667283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.444 [2024-10-11 22:58:37.667587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.444 [2024-10-11 22:58:37.667614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.444 [2024-10-11 22:58:37.667629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.444 [2024-10-11 22:58:37.667844] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.444 [2024-10-11 22:58:37.668048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.444 [2024-10-11 22:58:37.668067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.444 [2024-10-11 22:58:37.668080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.444 [2024-10-11 22:58:37.670820] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.444 [2024-10-11 22:58:37.680335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.444 [2024-10-11 22:58:37.680633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.444 [2024-10-11 22:58:37.680675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.444 [2024-10-11 22:58:37.680691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.444 [2024-10-11 22:58:37.680908] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.445 [2024-10-11 22:58:37.681111] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.445 [2024-10-11 22:58:37.681130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.445 [2024-10-11 22:58:37.681142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.445 [2024-10-11 22:58:37.683880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.445 [2024-10-11 22:58:37.693438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.445 [2024-10-11 22:58:37.693835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.445 [2024-10-11 22:58:37.693862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.445 [2024-10-11 22:58:37.693878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.445 [2024-10-11 22:58:37.694109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.445 [2024-10-11 22:58:37.694312] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.445 [2024-10-11 22:58:37.694331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.445 [2024-10-11 22:58:37.694343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.445 [2024-10-11 22:58:37.697200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.445 [2024-10-11 22:58:37.706771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.445 [2024-10-11 22:58:37.707149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.445 [2024-10-11 22:58:37.707176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.445 [2024-10-11 22:58:37.707191] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.445 [2024-10-11 22:58:37.707406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.445 [2024-10-11 22:58:37.707652] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.445 [2024-10-11 22:58:37.707672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.445 [2024-10-11 22:58:37.707690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.704 [2024-10-11 22:58:37.710963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.704 [2024-10-11 22:58:37.719917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.704 [2024-10-11 22:58:37.720286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.704 [2024-10-11 22:58:37.720314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.704 [2024-10-11 22:58:37.720329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.704 [2024-10-11 22:58:37.720543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.704 [2024-10-11 22:58:37.720762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.704 [2024-10-11 22:58:37.720782] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.704 [2024-10-11 22:58:37.720794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.704 [2024-10-11 22:58:37.723645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.704 [2024-10-11 22:58:37.733054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.704 [2024-10-11 22:58:37.733450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.704 [2024-10-11 22:58:37.733502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.704 [2024-10-11 22:58:37.733518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.704 [2024-10-11 22:58:37.733785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.704 [2024-10-11 22:58:37.733991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.704 [2024-10-11 22:58:37.734010] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.704 [2024-10-11 22:58:37.734022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.704 [2024-10-11 22:58:37.736799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.704 [2024-10-11 22:58:37.746001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.704 [2024-10-11 22:58:37.746375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.704 [2024-10-11 22:58:37.746402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.704 [2024-10-11 22:58:37.746416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.704 [2024-10-11 22:58:37.746664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.704 [2024-10-11 22:58:37.746888] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.704 [2024-10-11 22:58:37.746907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.704 [2024-10-11 22:58:37.746919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.704 [2024-10-11 22:58:37.749755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.704 [2024-10-11 22:58:37.759253] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.704 [2024-10-11 22:58:37.759628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.704 [2024-10-11 22:58:37.759662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.704 [2024-10-11 22:58:37.759679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.704 [2024-10-11 22:58:37.759920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.704 [2024-10-11 22:58:37.760130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.704 [2024-10-11 22:58:37.760149] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.704 [2024-10-11 22:58:37.760162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.704 [2024-10-11 22:58:37.763202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.704 [2024-10-11 22:58:37.772498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.704 [2024-10-11 22:58:37.772832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.704 [2024-10-11 22:58:37.772874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.704 [2024-10-11 22:58:37.772890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.704 [2024-10-11 22:58:37.773105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.704 [2024-10-11 22:58:37.773309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.704 [2024-10-11 22:58:37.773327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.704 [2024-10-11 22:58:37.773339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.704 [2024-10-11 22:58:37.776393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.704 [2024-10-11 22:58:37.785680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.704 [2024-10-11 22:58:37.786063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.704 [2024-10-11 22:58:37.786090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.704 [2024-10-11 22:58:37.786105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.704 [2024-10-11 22:58:37.786353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.704 [2024-10-11 22:58:37.786606] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.704 [2024-10-11 22:58:37.786628] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.704 [2024-10-11 22:58:37.786641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.704 [2024-10-11 22:58:37.789944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.704 [2024-10-11 22:58:37.798867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.704 [2024-10-11 22:58:37.799278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.704 [2024-10-11 22:58:37.799306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.704 [2024-10-11 22:58:37.799322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.704 [2024-10-11 22:58:37.799572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.704 [2024-10-11 22:58:37.799792] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.704 [2024-10-11 22:58:37.799812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.704 [2024-10-11 22:58:37.799825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.704 [2024-10-11 22:58:37.803100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.704 [2024-10-11 22:58:37.812396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.704 [2024-10-11 22:58:37.812717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.704 [2024-10-11 22:58:37.812746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.704 [2024-10-11 22:58:37.812763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.704 [2024-10-11 22:58:37.813005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.704 [2024-10-11 22:58:37.813219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.704 [2024-10-11 22:58:37.813239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.704 [2024-10-11 22:58:37.813253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.704 [2024-10-11 22:58:37.816506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.704 [2024-10-11 22:58:37.825995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.704 [2024-10-11 22:58:37.826381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.704 [2024-10-11 22:58:37.826431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.704 [2024-10-11 22:58:37.826448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.704 [2024-10-11 22:58:37.826671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.704 [2024-10-11 22:58:37.826916] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.704 [2024-10-11 22:58:37.826937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.704 [2024-10-11 22:58:37.826949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.704 [2024-10-11 22:58:37.830193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.705 [2024-10-11 22:58:37.839581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.705 [2024-10-11 22:58:37.840023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.705 [2024-10-11 22:58:37.840051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.705 [2024-10-11 22:58:37.840067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.705 [2024-10-11 22:58:37.840307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.705 [2024-10-11 22:58:37.840522] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.705 [2024-10-11 22:58:37.840567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.705 [2024-10-11 22:58:37.840583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.705 [2024-10-11 22:58:37.843789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.705 [2024-10-11 22:58:37.853166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.705 [2024-10-11 22:58:37.853571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.705 [2024-10-11 22:58:37.853600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:34.705 [2024-10-11 22:58:37.853616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:34.705 [2024-10-11 22:58:37.853830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:34.705 [2024-10-11 22:58:37.854058] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.705 [2024-10-11 22:58:37.854099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.705 [2024-10-11 22:58:37.854113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.705 [2024-10-11 22:58:37.857470] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.705 [2024-10-11 22:58:37.866507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.705 [2024-10-11 22:58:37.866839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.705 [2024-10-11 22:58:37.866882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.705 [2024-10-11 22:58:37.866898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.705 [2024-10-11 22:58:37.867120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.705 [2024-10-11 22:58:37.867322] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.705 [2024-10-11 22:58:37.867341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.705 [2024-10-11 22:58:37.867353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.705 [2024-10-11 22:58:37.870670] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.705 [2024-10-11 22:58:37.879886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.705 [2024-10-11 22:58:37.880341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.705 [2024-10-11 22:58:37.880378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.705 [2024-10-11 22:58:37.880411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.705 [2024-10-11 22:58:37.880830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.705 [2024-10-11 22:58:37.881068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.705 [2024-10-11 22:58:37.881101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.705 [2024-10-11 22:58:37.881113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.705 [2024-10-11 22:58:37.884130] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.705 [2024-10-11 22:58:37.893186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.705 [2024-10-11 22:58:37.893529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.705 [2024-10-11 22:58:37.893579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.705 [2024-10-11 22:58:37.893602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.705 [2024-10-11 22:58:37.893816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.705 [2024-10-11 22:58:37.894023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.705 [2024-10-11 22:58:37.894042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.705 [2024-10-11 22:58:37.894053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.705 [2024-10-11 22:58:37.897080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.705 [2024-10-11 22:58:37.906435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.705 [2024-10-11 22:58:37.906826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.705 [2024-10-11 22:58:37.906854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.705 [2024-10-11 22:58:37.906870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.705 [2024-10-11 22:58:37.907129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.705 [2024-10-11 22:58:37.907317] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.705 [2024-10-11 22:58:37.907336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.705 [2024-10-11 22:58:37.907348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.705 [2024-10-11 22:58:37.910338] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.705 [2024-10-11 22:58:37.919696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.705 [2024-10-11 22:58:37.920198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.705 [2024-10-11 22:58:37.920251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.705 [2024-10-11 22:58:37.920266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.705 [2024-10-11 22:58:37.920488] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.705 [2024-10-11 22:58:37.920707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.705 [2024-10-11 22:58:37.920727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.705 [2024-10-11 22:58:37.920740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.705 [2024-10-11 22:58:37.923591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.705 [2024-10-11 22:58:37.932722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.705 [2024-10-11 22:58:37.933005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.705 [2024-10-11 22:58:37.933046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.705 [2024-10-11 22:58:37.933061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.705 [2024-10-11 22:58:37.933257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.705 [2024-10-11 22:58:37.933477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.705 [2024-10-11 22:58:37.933500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.705 [2024-10-11 22:58:37.933513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.705 [2024-10-11 22:58:37.936369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.705 [2024-10-11 22:58:37.945828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.705 [2024-10-11 22:58:37.946184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.705 [2024-10-11 22:58:37.946212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.705 [2024-10-11 22:58:37.946228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.705 [2024-10-11 22:58:37.946462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.705 [2024-10-11 22:58:37.946693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.705 [2024-10-11 22:58:37.946714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.705 [2024-10-11 22:58:37.946726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.705 [2024-10-11 22:58:37.949519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.705 [2024-10-11 22:58:37.958927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.705 [2024-10-11 22:58:37.959298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.705 [2024-10-11 22:58:37.959325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.705 [2024-10-11 22:58:37.959340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.705 [2024-10-11 22:58:37.959565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.705 [2024-10-11 22:58:37.959769] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.705 [2024-10-11 22:58:37.959788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.705 [2024-10-11 22:58:37.959800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.705 [2024-10-11 22:58:37.962532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.965 [2024-10-11 22:58:37.972486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.965 [2024-10-11 22:58:37.972852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.965 [2024-10-11 22:58:37.972880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.965 [2024-10-11 22:58:37.972911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.965 [2024-10-11 22:58:37.973140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.965 [2024-10-11 22:58:37.973328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.965 [2024-10-11 22:58:37.973347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.965 [2024-10-11 22:58:37.973359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.965 [2024-10-11 22:58:37.976474] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.965 [2024-10-11 22:58:37.985694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.965 [2024-10-11 22:58:37.986056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.965 [2024-10-11 22:58:37.986082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.965 [2024-10-11 22:58:37.986097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.965 [2024-10-11 22:58:37.986327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.965 [2024-10-11 22:58:37.986544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.965 [2024-10-11 22:58:37.986573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.965 [2024-10-11 22:58:37.986585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.965 [2024-10-11 22:58:37.989342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.965 [2024-10-11 22:58:37.998705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.965 [2024-10-11 22:58:37.999013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.965 [2024-10-11 22:58:37.999040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.965 [2024-10-11 22:58:37.999056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.965 [2024-10-11 22:58:37.999271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.965 [2024-10-11 22:58:37.999473] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.965 [2024-10-11 22:58:37.999492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.965 [2024-10-11 22:58:37.999504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.965 [2024-10-11 22:58:38.002367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.965 [2024-10-11 22:58:38.011797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.965 [2024-10-11 22:58:38.012154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.965 [2024-10-11 22:58:38.012180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.965 [2024-10-11 22:58:38.012196] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.965 [2024-10-11 22:58:38.012410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.965 [2024-10-11 22:58:38.012639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.965 [2024-10-11 22:58:38.012660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.965 [2024-10-11 22:58:38.012672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.965 [2024-10-11 22:58:38.015501] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.965 [2024-10-11 22:58:38.024967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.965 [2024-10-11 22:58:38.025309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.965 [2024-10-11 22:58:38.025337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.965 [2024-10-11 22:58:38.025353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.965 [2024-10-11 22:58:38.025602] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.965 [2024-10-11 22:58:38.025802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.965 [2024-10-11 22:58:38.025822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.965 [2024-10-11 22:58:38.025835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.965 [2024-10-11 22:58:38.028696] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.965 [2024-10-11 22:58:38.038171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.965 [2024-10-11 22:58:38.038518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.965 [2024-10-11 22:58:38.038558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.965 [2024-10-11 22:58:38.038576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.965 [2024-10-11 22:58:38.038809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.965 [2024-10-11 22:58:38.039018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.965 [2024-10-11 22:58:38.039038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.965 [2024-10-11 22:58:38.039050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.965 [2024-10-11 22:58:38.041986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.965 [2024-10-11 22:58:38.051321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.965 [2024-10-11 22:58:38.051714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.965 [2024-10-11 22:58:38.051752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.965 [2024-10-11 22:58:38.051768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.965 [2024-10-11 22:58:38.052002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.965 [2024-10-11 22:58:38.052209] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.965 [2024-10-11 22:58:38.052228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.965 [2024-10-11 22:58:38.052241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.965 [2024-10-11 22:58:38.055082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.965 [2024-10-11 22:58:38.064547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.965 [2024-10-11 22:58:38.064879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.965 [2024-10-11 22:58:38.064906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.965 [2024-10-11 22:58:38.064923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.965 [2024-10-11 22:58:38.065144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.965 [2024-10-11 22:58:38.065353] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.965 [2024-10-11 22:58:38.065373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.965 [2024-10-11 22:58:38.065393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.965 [2024-10-11 22:58:38.068472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.965 [2024-10-11 22:58:38.077943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.965 [2024-10-11 22:58:38.078290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.965 [2024-10-11 22:58:38.078318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.965 [2024-10-11 22:58:38.078334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.965 [2024-10-11 22:58:38.078576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.965 [2024-10-11 22:58:38.078801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.965 [2024-10-11 22:58:38.078838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.965 [2024-10-11 22:58:38.078851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.965 [2024-10-11 22:58:38.081926] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.965 [2024-10-11 22:58:38.091161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.966 [2024-10-11 22:58:38.091519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.966 [2024-10-11 22:58:38.091565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.966 [2024-10-11 22:58:38.091583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.966 [2024-10-11 22:58:38.091824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.966 [2024-10-11 22:58:38.092032] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.966 [2024-10-11 22:58:38.092051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.966 [2024-10-11 22:58:38.092064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.966 [2024-10-11 22:58:38.095003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.966 [2024-10-11 22:58:38.104546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.966 [2024-10-11 22:58:38.104954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.966 [2024-10-11 22:58:38.104993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.966 [2024-10-11 22:58:38.105009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.966 [2024-10-11 22:58:38.105247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.966 [2024-10-11 22:58:38.105451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.966 [2024-10-11 22:58:38.105472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.966 [2024-10-11 22:58:38.105500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.966 [2024-10-11 22:58:38.108791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.966 [2024-10-11 22:58:38.117909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.966 [2024-10-11 22:58:38.118280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.966 [2024-10-11 22:58:38.118307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.966 [2024-10-11 22:58:38.118323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.966 [2024-10-11 22:58:38.118765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.966 [2024-10-11 22:58:38.118998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.966 [2024-10-11 22:58:38.119017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.966 [2024-10-11 22:58:38.119030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.966 [2024-10-11 22:58:38.122061] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.966 [2024-10-11 22:58:38.131285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.966 [2024-10-11 22:58:38.131633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.966 [2024-10-11 22:58:38.131662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.966 [2024-10-11 22:58:38.131679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.966 [2024-10-11 22:58:38.131920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.966 [2024-10-11 22:58:38.132108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.966 [2024-10-11 22:58:38.132127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.966 [2024-10-11 22:58:38.132139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.966 [2024-10-11 22:58:38.135140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.966 [2024-10-11 22:58:38.144548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.966 [2024-10-11 22:58:38.144915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.966 [2024-10-11 22:58:38.144943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.966 [2024-10-11 22:58:38.144959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.966 [2024-10-11 22:58:38.145195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.966 [2024-10-11 22:58:38.145383] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.966 [2024-10-11 22:58:38.145402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.966 [2024-10-11 22:58:38.145413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.966 [2024-10-11 22:58:38.148412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.966 [2024-10-11 22:58:38.157780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.966 [2024-10-11 22:58:38.158210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.966 [2024-10-11 22:58:38.158238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.966 [2024-10-11 22:58:38.158253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.966 [2024-10-11 22:58:38.158491] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.966 [2024-10-11 22:58:38.158726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.966 [2024-10-11 22:58:38.158747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.966 [2024-10-11 22:58:38.158759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.966 [2024-10-11 22:58:38.161595] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.966 [2024-10-11 22:58:38.170793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.966 [2024-10-11 22:58:38.171105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.966 [2024-10-11 22:58:38.171131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.966 [2024-10-11 22:58:38.171146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.966 [2024-10-11 22:58:38.171363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.966 [2024-10-11 22:58:38.171595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.966 [2024-10-11 22:58:38.171617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.966 [2024-10-11 22:58:38.171630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.966 [2024-10-11 22:58:38.174400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.966 [2024-10-11 22:58:38.183768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.966 [2024-10-11 22:58:38.184119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.966 [2024-10-11 22:58:38.184146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.966 [2024-10-11 22:58:38.184162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.966 [2024-10-11 22:58:38.184395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.966 [2024-10-11 22:58:38.184624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.966 [2024-10-11 22:58:38.184645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.966 [2024-10-11 22:58:38.184658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.966 [2024-10-11 22:58:38.187410] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.966 [2024-10-11 22:58:38.196779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.966 [2024-10-11 22:58:38.197086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.966 [2024-10-11 22:58:38.197113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.966 [2024-10-11 22:58:38.197129] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.966 [2024-10-11 22:58:38.197343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.966 [2024-10-11 22:58:38.197547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.966 [2024-10-11 22:58:38.197580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.966 [2024-10-11 22:58:38.197592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.966 [2024-10-11 22:58:38.200352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.966 [2024-10-11 22:58:38.209879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.966 [2024-10-11 22:58:38.210186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.966 [2024-10-11 22:58:38.210213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.966 [2024-10-11 22:58:38.210228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.966 [2024-10-11 22:58:38.210444] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.966 [2024-10-11 22:58:38.210677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.966 [2024-10-11 22:58:38.210697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.966 [2024-10-11 22:58:38.210710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.966 [2024-10-11 22:58:38.213460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.966 [2024-10-11 22:58:38.223063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.966 [2024-10-11 22:58:38.223406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.966 [2024-10-11 22:58:38.223435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:34.966 [2024-10-11 22:58:38.223450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:34.966 [2024-10-11 22:58:38.223712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:34.966 [2024-10-11 22:58:38.223933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.967 [2024-10-11 22:58:38.223952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.967 [2024-10-11 22:58:38.223964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.967 [2024-10-11 22:58:38.226814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.226 [2024-10-11 22:58:38.236363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.226 [2024-10-11 22:58:38.236825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.226 [2024-10-11 22:58:38.236852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.226 [2024-10-11 22:58:38.236868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.226 [2024-10-11 22:58:38.237136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.226 [2024-10-11 22:58:38.237360] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.226 [2024-10-11 22:58:38.237381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.226 [2024-10-11 22:58:38.237395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.226 [2024-10-11 22:58:38.240411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.226 [2024-10-11 22:58:38.249417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.226 [2024-10-11 22:58:38.249762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.226 [2024-10-11 22:58:38.249794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.226 [2024-10-11 22:58:38.249816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.226 [2024-10-11 22:58:38.250043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.226 [2024-10-11 22:58:38.250230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.226 [2024-10-11 22:58:38.250248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.226 [2024-10-11 22:58:38.250260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.226 [2024-10-11 22:58:38.253122] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.226 [2024-10-11 22:58:38.262513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.226 [2024-10-11 22:58:38.262837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.226 [2024-10-11 22:58:38.262864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.226 [2024-10-11 22:58:38.262879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.226 [2024-10-11 22:58:38.263080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.226 [2024-10-11 22:58:38.263300] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.226 [2024-10-11 22:58:38.263319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.226 [2024-10-11 22:58:38.263331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.226 [2024-10-11 22:58:38.266178] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.226 [2024-10-11 22:58:38.275530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.226 [2024-10-11 22:58:38.275947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.226 [2024-10-11 22:58:38.275975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.226 [2024-10-11 22:58:38.275991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.226 [2024-10-11 22:58:38.276224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.226 [2024-10-11 22:58:38.276427] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.226 [2024-10-11 22:58:38.276446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.226 [2024-10-11 22:58:38.276459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.226 [2024-10-11 22:58:38.279199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.226 [2024-10-11 22:58:38.288564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.226 [2024-10-11 22:58:38.288987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.226 [2024-10-11 22:58:38.289014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.226 [2024-10-11 22:58:38.289032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.226 [2024-10-11 22:58:38.289266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.226 [2024-10-11 22:58:38.289474] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.226 [2024-10-11 22:58:38.289493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.226 [2024-10-11 22:58:38.289505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.226 [2024-10-11 22:58:38.292367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.226 [2024-10-11 22:58:38.301796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.226 [2024-10-11 22:58:38.302213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.226 [2024-10-11 22:58:38.302241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.226 [2024-10-11 22:58:38.302259] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.226 [2024-10-11 22:58:38.302494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.226 [2024-10-11 22:58:38.302698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.226 [2024-10-11 22:58:38.302719] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.226 [2024-10-11 22:58:38.302732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.226 [2024-10-11 22:58:38.305554] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.226 [2024-10-11 22:58:38.314869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.226 [2024-10-11 22:58:38.315216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.226 [2024-10-11 22:58:38.315244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.226 [2024-10-11 22:58:38.315261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.226 [2024-10-11 22:58:38.315493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.226 [2024-10-11 22:58:38.315727] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.226 [2024-10-11 22:58:38.315748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.226 [2024-10-11 22:58:38.315761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.226 [2024-10-11 22:58:38.318616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.226 [2024-10-11 22:58:38.328041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.226 [2024-10-11 22:58:38.328446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.226 [2024-10-11 22:58:38.328475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.226 [2024-10-11 22:58:38.328491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.226 [2024-10-11 22:58:38.328757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.226 [2024-10-11 22:58:38.328977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.226 [2024-10-11 22:58:38.328997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.226 [2024-10-11 22:58:38.329009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.226 [2024-10-11 22:58:38.331793] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.226 [2024-10-11 22:58:38.341013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.226 [2024-10-11 22:58:38.341419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.226 [2024-10-11 22:58:38.341447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.226 [2024-10-11 22:58:38.341463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.226 [2024-10-11 22:58:38.341709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.227 [2024-10-11 22:58:38.341911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.227 [2024-10-11 22:58:38.341932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.227 [2024-10-11 22:58:38.341944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.227 [2024-10-11 22:58:38.344784] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.227 [2024-10-11 22:58:38.354196] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.227 [2024-10-11 22:58:38.354537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.227 [2024-10-11 22:58:38.354574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.227 [2024-10-11 22:58:38.354590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.227 [2024-10-11 22:58:38.354823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.227 [2024-10-11 22:58:38.355022] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.227 [2024-10-11 22:58:38.355042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.227 [2024-10-11 22:58:38.355054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.227 [2024-10-11 22:58:38.358453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.227 [2024-10-11 22:58:38.367431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.227 [2024-10-11 22:58:38.367795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.227 [2024-10-11 22:58:38.367823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.227 [2024-10-11 22:58:38.367839] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.227 [2024-10-11 22:58:38.368089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.227 [2024-10-11 22:58:38.368276] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.227 [2024-10-11 22:58:38.368296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.227 [2024-10-11 22:58:38.368309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.227 [2024-10-11 22:58:38.371239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.227 [2024-10-11 22:58:38.380438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.227 [2024-10-11 22:58:38.380851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.227 [2024-10-11 22:58:38.380879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.227 [2024-10-11 22:58:38.380900] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.227 [2024-10-11 22:58:38.381135] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.227 [2024-10-11 22:58:38.381337] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.227 [2024-10-11 22:58:38.381357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.227 [2024-10-11 22:58:38.381369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.227 [2024-10-11 22:58:38.384237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.227 [2024-10-11 22:58:38.393657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.227 [2024-10-11 22:58:38.394043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.227 [2024-10-11 22:58:38.394070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.227 [2024-10-11 22:58:38.394086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.227 [2024-10-11 22:58:38.394282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.227 [2024-10-11 22:58:38.394499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.227 [2024-10-11 22:58:38.394519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.227 [2024-10-11 22:58:38.394546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.227 [2024-10-11 22:58:38.397364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.227 [2024-10-11 22:58:38.406748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.227 [2024-10-11 22:58:38.407108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.227 [2024-10-11 22:58:38.407136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.227 [2024-10-11 22:58:38.407152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.227 [2024-10-11 22:58:38.407383] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.227 [2024-10-11 22:58:38.407619] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.227 [2024-10-11 22:58:38.407641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.227 [2024-10-11 22:58:38.407655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.227 [2024-10-11 22:58:38.410494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.227 [2024-10-11 22:58:38.419826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.227 [2024-10-11 22:58:38.420229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.227 [2024-10-11 22:58:38.420256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.227 [2024-10-11 22:58:38.420272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.227 [2024-10-11 22:58:38.420485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.227 [2024-10-11 22:58:38.420700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.227 [2024-10-11 22:58:38.420725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.227 [2024-10-11 22:58:38.420738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.227 [2024-10-11 22:58:38.423587] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.227 [2024-10-11 22:58:38.433010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.227 [2024-10-11 22:58:38.433412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.227 [2024-10-11 22:58:38.433440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.227 [2024-10-11 22:58:38.433456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.227 [2024-10-11 22:58:38.433721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.227 [2024-10-11 22:58:38.433944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.227 [2024-10-11 22:58:38.433964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.227 [2024-10-11 22:58:38.433976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.227 [2024-10-11 22:58:38.436796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.227 [2024-10-11 22:58:38.446043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.227 [2024-10-11 22:58:38.446432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.227 [2024-10-11 22:58:38.446482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.227 [2024-10-11 22:58:38.446498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.227 [2024-10-11 22:58:38.446749] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.227 [2024-10-11 22:58:38.446936] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.227 [2024-10-11 22:58:38.446956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.227 [2024-10-11 22:58:38.446968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.227 [2024-10-11 22:58:38.449808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.227 [2024-10-11 22:58:38.459138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.227 [2024-10-11 22:58:38.459606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.227 [2024-10-11 22:58:38.459636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.227 [2024-10-11 22:58:38.459652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.227 [2024-10-11 22:58:38.459905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.227 [2024-10-11 22:58:38.460108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.227 [2024-10-11 22:58:38.460128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.227 [2024-10-11 22:58:38.460141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.227 [2024-10-11 22:58:38.462967] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.227 5630.50 IOPS, 21.99 MiB/s [2024-10-11T20:58:38.495Z] [2024-10-11 22:58:38.473608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.227 [2024-10-11 22:58:38.473984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.227 [2024-10-11 22:58:38.474013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.227 [2024-10-11 22:58:38.474028] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.227 [2024-10-11 22:58:38.474243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.227 [2024-10-11 22:58:38.474446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.227 [2024-10-11 22:58:38.474466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.227 [2024-10-11 22:58:38.474478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.227 [2024-10-11 22:58:38.477340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.227 [2024-10-11 22:58:38.486767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.227 [2024-10-11 22:58:38.487189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.227 [2024-10-11 22:58:38.487216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.227 [2024-10-11 22:58:38.487232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.228 [2024-10-11 22:58:38.487466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.228 [2024-10-11 22:58:38.487699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.228 [2024-10-11 22:58:38.487721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.228 [2024-10-11 22:58:38.487734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.228 [2024-10-11 22:58:38.490802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.487 [2024-10-11 22:58:38.500282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.487 [2024-10-11 22:58:38.500686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.487 [2024-10-11 22:58:38.500714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.487 [2024-10-11 22:58:38.500729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.487 [2024-10-11 22:58:38.500957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.487 [2024-10-11 22:58:38.501160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.487 [2024-10-11 22:58:38.501180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.487 [2024-10-11 22:58:38.501193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.487 [2024-10-11 22:58:38.504126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.487 [2024-10-11 22:58:38.513359] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.487 [2024-10-11 22:58:38.513728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.487 [2024-10-11 22:58:38.513758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.487 [2024-10-11 22:58:38.513774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.487 [2024-10-11 22:58:38.514029] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.487 [2024-10-11 22:58:38.514231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.487 [2024-10-11 22:58:38.514251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.487 [2024-10-11 22:58:38.514263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.487 [2024-10-11 22:58:38.517087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.487 [2024-10-11 22:58:38.526309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.487 [2024-10-11 22:58:38.526685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.487 [2024-10-11 22:58:38.526713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.487 [2024-10-11 22:58:38.526729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.487 [2024-10-11 22:58:38.526944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.487 [2024-10-11 22:58:38.527147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.487 [2024-10-11 22:58:38.527168] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.487 [2024-10-11 22:58:38.527180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.487 [2024-10-11 22:58:38.530042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.487 [2024-10-11 22:58:38.539442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.487 [2024-10-11 22:58:38.539875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.487 [2024-10-11 22:58:38.539903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.487 [2024-10-11 22:58:38.539919] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.487 [2024-10-11 22:58:38.540152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.487 [2024-10-11 22:58:38.540354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.487 [2024-10-11 22:58:38.540375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.487 [2024-10-11 22:58:38.540388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.487 [2024-10-11 22:58:38.543214] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.487 [2024-10-11 22:58:38.552423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.487 [2024-10-11 22:58:38.552770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.487 [2024-10-11 22:58:38.552797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.487 [2024-10-11 22:58:38.552812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.487 [2024-10-11 22:58:38.553025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.487 [2024-10-11 22:58:38.553227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.487 [2024-10-11 22:58:38.553247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.487 [2024-10-11 22:58:38.553264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.487 [2024-10-11 22:58:38.556130] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.487 [2024-10-11 22:58:38.565519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.487 [2024-10-11 22:58:38.565890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.487 [2024-10-11 22:58:38.565919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.487 [2024-10-11 22:58:38.565934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.487 [2024-10-11 22:58:38.566168] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.487 [2024-10-11 22:58:38.566370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.487 [2024-10-11 22:58:38.566390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.487 [2024-10-11 22:58:38.566404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.487 [2024-10-11 22:58:38.569228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.487 [2024-10-11 22:58:38.578639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.487 [2024-10-11 22:58:38.579009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.487 [2024-10-11 22:58:38.579036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.487 [2024-10-11 22:58:38.579051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.487 [2024-10-11 22:58:38.579268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.487 [2024-10-11 22:58:38.579471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.487 [2024-10-11 22:58:38.579501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.487 [2024-10-11 22:58:38.579513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.487 [2024-10-11 22:58:38.582340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.487 [2024-10-11 22:58:38.591826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.487 [2024-10-11 22:58:38.592167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.487 [2024-10-11 22:58:38.592194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.487 [2024-10-11 22:58:38.592209] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.487 [2024-10-11 22:58:38.592423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.487 [2024-10-11 22:58:38.592637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.487 [2024-10-11 22:58:38.592657] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.487 [2024-10-11 22:58:38.592670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.487 [2024-10-11 22:58:38.595537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.488 [2024-10-11 22:58:38.604986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.488 [2024-10-11 22:58:38.605452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.488 [2024-10-11 22:58:38.605506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.488 [2024-10-11 22:58:38.605523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.488 [2024-10-11 22:58:38.605777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.488 [2024-10-11 22:58:38.606020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.488 [2024-10-11 22:58:38.606043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.488 [2024-10-11 22:58:38.606057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.488 [2024-10-11 22:58:38.609461] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.488 [2024-10-11 22:58:38.618182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.488 [2024-10-11 22:58:38.618632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.488 [2024-10-11 22:58:38.618662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.488 [2024-10-11 22:58:38.618678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.488 [2024-10-11 22:58:38.618920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.488 [2024-10-11 22:58:38.619121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.488 [2024-10-11 22:58:38.619141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.488 [2024-10-11 22:58:38.619154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.488 [2024-10-11 22:58:38.622034] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.488 [2024-10-11 22:58:38.631287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.488 [2024-10-11 22:58:38.631634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.488 [2024-10-11 22:58:38.631663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.488 [2024-10-11 22:58:38.631679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.488 [2024-10-11 22:58:38.631914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.488 [2024-10-11 22:58:38.632117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.488 [2024-10-11 22:58:38.632137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.488 [2024-10-11 22:58:38.632150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.488 [2024-10-11 22:58:38.635014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.488 [2024-10-11 22:58:38.644304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.488 [2024-10-11 22:58:38.644707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.488 [2024-10-11 22:58:38.644735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.488 [2024-10-11 22:58:38.644751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.488 [2024-10-11 22:58:38.644983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.488 [2024-10-11 22:58:38.645187] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.488 [2024-10-11 22:58:38.645207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.488 [2024-10-11 22:58:38.645220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.488 [2024-10-11 22:58:38.648118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.488 [2024-10-11 22:58:38.657506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.488 [2024-10-11 22:58:38.657970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.488 [2024-10-11 22:58:38.658000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.488 [2024-10-11 22:58:38.658016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.488 [2024-10-11 22:58:38.658262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.488 [2024-10-11 22:58:38.658451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.488 [2024-10-11 22:58:38.658472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.488 [2024-10-11 22:58:38.658484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.488 [2024-10-11 22:58:38.661308] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.488 [2024-10-11 22:58:38.670548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.488 [2024-10-11 22:58:38.670897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.488 [2024-10-11 22:58:38.670923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.488 [2024-10-11 22:58:38.670938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.488 [2024-10-11 22:58:38.671166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.488 [2024-10-11 22:58:38.671368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.488 [2024-10-11 22:58:38.671389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.488 [2024-10-11 22:58:38.671401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.488 [2024-10-11 22:58:38.674264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.488 [2024-10-11 22:58:38.683691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.488 [2024-10-11 22:58:38.684050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.488 [2024-10-11 22:58:38.684076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.488 [2024-10-11 22:58:38.684091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.488 [2024-10-11 22:58:38.684305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.488 [2024-10-11 22:58:38.684507] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.488 [2024-10-11 22:58:38.684527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.488 [2024-10-11 22:58:38.684539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.488 [2024-10-11 22:58:38.687406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.488 [2024-10-11 22:58:38.696844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.488 [2024-10-11 22:58:38.697246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.488 [2024-10-11 22:58:38.697273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.488 [2024-10-11 22:58:38.697289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.488 [2024-10-11 22:58:38.697505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.488 [2024-10-11 22:58:38.697718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.488 [2024-10-11 22:58:38.697738] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.488 [2024-10-11 22:58:38.697751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.488 [2024-10-11 22:58:38.700582] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.488 [2024-10-11 22:58:38.710001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.488 [2024-10-11 22:58:38.710401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.488 [2024-10-11 22:58:38.710453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.488 [2024-10-11 22:58:38.710469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.488 [2024-10-11 22:58:38.710726] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.488 [2024-10-11 22:58:38.710930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.488 [2024-10-11 22:58:38.710950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.488 [2024-10-11 22:58:38.710962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.488 [2024-10-11 22:58:38.713831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.488 [2024-10-11 22:58:38.723054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.488 [2024-10-11 22:58:38.723400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.488 [2024-10-11 22:58:38.723429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.488 [2024-10-11 22:58:38.723446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.488 [2024-10-11 22:58:38.723717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.488 [2024-10-11 22:58:38.723912] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.488 [2024-10-11 22:58:38.723933] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.488 [2024-10-11 22:58:38.723946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.488 [2024-10-11 22:58:38.726802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.488 [2024-10-11 22:58:38.736217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.488 [2024-10-11 22:58:38.736623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.488 [2024-10-11 22:58:38.736660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.488 [2024-10-11 22:58:38.736677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.488 [2024-10-11 22:58:38.736915] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.488 [2024-10-11 22:58:38.737116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.488 [2024-10-11 22:58:38.737136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.488 [2024-10-11 22:58:38.737149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.489 [2024-10-11 22:58:38.739949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.489 [2024-10-11 22:58:38.749317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.489 [2024-10-11 22:58:38.749654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.489 [2024-10-11 22:58:38.749682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.489 [2024-10-11 22:58:38.749697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.489 [2024-10-11 22:58:38.749911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.489 [2024-10-11 22:58:38.750127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.489 [2024-10-11 22:58:38.750146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.489 [2024-10-11 22:58:38.750173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.489 [2024-10-11 22:58:38.753485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.748 [2024-10-11 22:58:38.762645] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.748 [2024-10-11 22:58:38.763034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.748 [2024-10-11 22:58:38.763063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.748 [2024-10-11 22:58:38.763079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.748 [2024-10-11 22:58:38.763314] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.748 [2024-10-11 22:58:38.763518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.748 [2024-10-11 22:58:38.763563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.748 [2024-10-11 22:58:38.763579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.748 [2024-10-11 22:58:38.766454] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.748 [2024-10-11 22:58:38.775622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.748 [2024-10-11 22:58:38.775932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.748 [2024-10-11 22:58:38.775960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.748 [2024-10-11 22:58:38.775976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.748 [2024-10-11 22:58:38.776193] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.748 [2024-10-11 22:58:38.776403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.748 [2024-10-11 22:58:38.776424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.748 [2024-10-11 22:58:38.776436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.748 [2024-10-11 22:58:38.779298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.748 [2024-10-11 22:58:38.788722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.748 [2024-10-11 22:58:38.789079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.748 [2024-10-11 22:58:38.789108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.748 [2024-10-11 22:58:38.789124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.748 [2024-10-11 22:58:38.789338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.748 [2024-10-11 22:58:38.789567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.748 [2024-10-11 22:58:38.789589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.748 [2024-10-11 22:58:38.789601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.748 [2024-10-11 22:58:38.792355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.748 [2024-10-11 22:58:38.801917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.748 [2024-10-11 22:58:38.802225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.748 [2024-10-11 22:58:38.802252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.748 [2024-10-11 22:58:38.802267] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.748 [2024-10-11 22:58:38.802482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.748 [2024-10-11 22:58:38.802712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.748 [2024-10-11 22:58:38.802734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.748 [2024-10-11 22:58:38.802746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.748 [2024-10-11 22:58:38.805643] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.748 [2024-10-11 22:58:38.814942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.748 [2024-10-11 22:58:38.815292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.748 [2024-10-11 22:58:38.815320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.748 [2024-10-11 22:58:38.815335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.748 [2024-10-11 22:58:38.815582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.748 [2024-10-11 22:58:38.815784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.748 [2024-10-11 22:58:38.815803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.748 [2024-10-11 22:58:38.815815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.748 [2024-10-11 22:58:38.818651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.748 [2024-10-11 22:58:38.828084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.748 [2024-10-11 22:58:38.828426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.748 [2024-10-11 22:58:38.828453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.748 [2024-10-11 22:58:38.828469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.748 [2024-10-11 22:58:38.828730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.748 [2024-10-11 22:58:38.828949] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.748 [2024-10-11 22:58:38.828969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.748 [2024-10-11 22:58:38.828982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.748 [2024-10-11 22:58:38.831823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.749 [2024-10-11 22:58:38.841239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.749 [2024-10-11 22:58:38.841582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.749 [2024-10-11 22:58:38.841610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.749 [2024-10-11 22:58:38.841625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.749 [2024-10-11 22:58:38.841840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.749 [2024-10-11 22:58:38.842043] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.749 [2024-10-11 22:58:38.842061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.749 [2024-10-11 22:58:38.842073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.749 [2024-10-11 22:58:38.844938] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.749 [2024-10-11 22:58:38.854332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.749 [2024-10-11 22:58:38.854707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.749 [2024-10-11 22:58:38.854736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.749 [2024-10-11 22:58:38.854752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.749 [2024-10-11 22:58:38.854997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.749 [2024-10-11 22:58:38.855200] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.749 [2024-10-11 22:58:38.855221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.749 [2024-10-11 22:58:38.855233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.749 [2024-10-11 22:58:38.858649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.749 [2024-10-11 22:58:38.867667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.749 [2024-10-11 22:58:38.868068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.749 [2024-10-11 22:58:38.868096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.749 [2024-10-11 22:58:38.868117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.749 [2024-10-11 22:58:38.868350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.749 [2024-10-11 22:58:38.868581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.749 [2024-10-11 22:58:38.868603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.749 [2024-10-11 22:58:38.868617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.749 [2024-10-11 22:58:38.871475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.749 [2024-10-11 22:58:38.880981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.749 [2024-10-11 22:58:38.881382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.749 [2024-10-11 22:58:38.881410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.749 [2024-10-11 22:58:38.881427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.749 [2024-10-11 22:58:38.881691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.749 [2024-10-11 22:58:38.881926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.749 [2024-10-11 22:58:38.881946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.749 [2024-10-11 22:58:38.881960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.749 [2024-10-11 22:58:38.884961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.749 [2024-10-11 22:58:38.894326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.749 [2024-10-11 22:58:38.894678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.749 [2024-10-11 22:58:38.894709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.749 [2024-10-11 22:58:38.894726] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.749 [2024-10-11 22:58:38.894972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.749 [2024-10-11 22:58:38.895181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.749 [2024-10-11 22:58:38.895202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.749 [2024-10-11 22:58:38.895216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.749 [2024-10-11 22:58:38.898345] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.749 [2024-10-11 22:58:38.907606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.749 [2024-10-11 22:58:38.908007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.749 [2024-10-11 22:58:38.908035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.749 [2024-10-11 22:58:38.908052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.749 [2024-10-11 22:58:38.908285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.749 [2024-10-11 22:58:38.908489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.749 [2024-10-11 22:58:38.908513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.749 [2024-10-11 22:58:38.908526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.749 [2024-10-11 22:58:38.911575] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.749 [2024-10-11 22:58:38.921116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.749 [2024-10-11 22:58:38.921609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.749 [2024-10-11 22:58:38.921649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.749 [2024-10-11 22:58:38.921665] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.749 [2024-10-11 22:58:38.921905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.749 [2024-10-11 22:58:38.922109] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.749 [2024-10-11 22:58:38.922129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.749 [2024-10-11 22:58:38.922142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.749 [2024-10-11 22:58:38.925156] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.749 [2024-10-11 22:58:38.934369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.749 [2024-10-11 22:58:38.934745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.749 [2024-10-11 22:58:38.934773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.749 [2024-10-11 22:58:38.934789] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.749 [2024-10-11 22:58:38.935031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.749 [2024-10-11 22:58:38.935219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.749 [2024-10-11 22:58:38.935239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.749 [2024-10-11 22:58:38.935251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.749 [2024-10-11 22:58:38.938125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.749 [2024-10-11 22:58:38.947545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.749 [2024-10-11 22:58:38.947920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.749 [2024-10-11 22:58:38.947947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.749 [2024-10-11 22:58:38.947963] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.749 [2024-10-11 22:58:38.948198] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.749 [2024-10-11 22:58:38.948386] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.749 [2024-10-11 22:58:38.948405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.749 [2024-10-11 22:58:38.948417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.749 [2024-10-11 22:58:38.951281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.749 [2024-10-11 22:58:38.960727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.749 [2024-10-11 22:58:38.961027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.749 [2024-10-11 22:58:38.961068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.749 [2024-10-11 22:58:38.961083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.749 [2024-10-11 22:58:38.961278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.749 [2024-10-11 22:58:38.961497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.749 [2024-10-11 22:58:38.961518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.749 [2024-10-11 22:58:38.961544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.749 [2024-10-11 22:58:38.964400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.749 [2024-10-11 22:58:38.973864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.749 [2024-10-11 22:58:38.974205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.749 [2024-10-11 22:58:38.974232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.749 [2024-10-11 22:58:38.974248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.749 [2024-10-11 22:58:38.974463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.749 [2024-10-11 22:58:38.974705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.749 [2024-10-11 22:58:38.974726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.749 [2024-10-11 22:58:38.974738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.750 [2024-10-11 22:58:38.977588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.750 [2024-10-11 22:58:38.986997] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.750 [2024-10-11 22:58:38.987341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.750 [2024-10-11 22:58:38.987369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.750 [2024-10-11 22:58:38.987385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.750 [2024-10-11 22:58:38.987630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.750 [2024-10-11 22:58:38.987853] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.750 [2024-10-11 22:58:38.987872] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.750 [2024-10-11 22:58:38.987884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.750 [2024-10-11 22:58:38.990733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.750 [2024-10-11 22:58:39.000168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.750 [2024-10-11 22:58:39.000511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.750 [2024-10-11 22:58:39.000539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.750 [2024-10-11 22:58:39.000581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.750 [2024-10-11 22:58:39.000840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.750 [2024-10-11 22:58:39.001044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.750 [2024-10-11 22:58:39.001063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.750 [2024-10-11 22:58:39.001075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.750 [2024-10-11 22:58:39.003933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.750 [2024-10-11 22:58:39.013720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.750 [2024-10-11 22:58:39.014113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.750 [2024-10-11 22:58:39.014185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:35.750 [2024-10-11 22:58:39.014201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:35.750 [2024-10-11 22:58:39.014435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:35.750 [2024-10-11 22:58:39.014667] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.750 [2024-10-11 22:58:39.014688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.750 [2024-10-11 22:58:39.014701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.009 [2024-10-11 22:58:39.017882] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.009 [2024-10-11 22:58:39.026866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.009 [2024-10-11 22:58:39.027275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.009 [2024-10-11 22:58:39.027329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:36.009 [2024-10-11 22:58:39.027345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:36.009 [2024-10-11 22:58:39.027604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:36.009 [2024-10-11 22:58:39.027810] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.009 [2024-10-11 22:58:39.027859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.009 [2024-10-11 22:58:39.027871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.009 [2024-10-11 22:58:39.030698] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.009 [2024-10-11 22:58:39.040090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.009 [2024-10-11 22:58:39.040479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.009 [2024-10-11 22:58:39.040535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:36.009 [2024-10-11 22:58:39.040557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:36.009 [2024-10-11 22:58:39.040821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:36.009 [2024-10-11 22:58:39.041024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.009 [2024-10-11 22:58:39.041045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.009 [2024-10-11 22:58:39.041063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.009 [2024-10-11 22:58:39.043928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.009 [2024-10-11 22:58:39.053131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.009 [2024-10-11 22:58:39.053475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.009 [2024-10-11 22:58:39.053502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:36.009 [2024-10-11 22:58:39.053518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:36.009 [2024-10-11 22:58:39.053776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:36.009 [2024-10-11 22:58:39.053997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.009 [2024-10-11 22:58:39.054017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.009 [2024-10-11 22:58:39.054029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.009 [2024-10-11 22:58:39.056889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.009 [2024-10-11 22:58:39.066305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.009 [2024-10-11 22:58:39.066707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.009 [2024-10-11 22:58:39.066736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:36.009 [2024-10-11 22:58:39.066752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:36.009 [2024-10-11 22:58:39.066985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:36.009 [2024-10-11 22:58:39.067189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.009 [2024-10-11 22:58:39.067209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.009 [2024-10-11 22:58:39.067222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.009 [2024-10-11 22:58:39.070082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.009 [2024-10-11 22:58:39.079369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.009 [2024-10-11 22:58:39.079720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.009 [2024-10-11 22:58:39.079749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:36.009 [2024-10-11 22:58:39.079765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:36.009 [2024-10-11 22:58:39.079999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:36.009 [2024-10-11 22:58:39.080203] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.009 [2024-10-11 22:58:39.080223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.009 [2024-10-11 22:58:39.080236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.009 [2024-10-11 22:58:39.083100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.009 [2024-10-11 22:58:39.092496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.009 [2024-10-11 22:58:39.092932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.009 [2024-10-11 22:58:39.092961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:36.009 [2024-10-11 22:58:39.092976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:36.009 [2024-10-11 22:58:39.093203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:36.009 [2024-10-11 22:58:39.093408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.009 [2024-10-11 22:58:39.093428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.009 [2024-10-11 22:58:39.093440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.009 [2024-10-11 22:58:39.096287] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.009 [2024-10-11 22:58:39.105600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.009 [2024-10-11 22:58:39.105952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.010 [2024-10-11 22:58:39.105983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.010 [2024-10-11 22:58:39.105999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.010 [2024-10-11 22:58:39.106245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.010 [2024-10-11 22:58:39.106459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.010 [2024-10-11 22:58:39.106479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.010 [2024-10-11 22:58:39.106492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.010 [2024-10-11 22:58:39.109975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.010 [2024-10-11 22:58:39.118735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.010 [2024-10-11 22:58:39.119094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.010 [2024-10-11 22:58:39.119122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.010 [2024-10-11 22:58:39.119137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.010 [2024-10-11 22:58:39.119352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.010 [2024-10-11 22:58:39.119582] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.010 [2024-10-11 22:58:39.119609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.010 [2024-10-11 22:58:39.119632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.010 [2024-10-11 22:58:39.122468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.010 [2024-10-11 22:58:39.132210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.010 [2024-10-11 22:58:39.132564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.010 [2024-10-11 22:58:39.132609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.010 [2024-10-11 22:58:39.132626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.010 [2024-10-11 22:58:39.132841] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.010 [2024-10-11 22:58:39.133059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.010 [2024-10-11 22:58:39.133079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.010 [2024-10-11 22:58:39.133092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.010 [2024-10-11 22:58:39.136135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.010 [2024-10-11 22:58:39.145590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.010 [2024-10-11 22:58:39.145940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.010 [2024-10-11 22:58:39.145968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.010 [2024-10-11 22:58:39.145984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.010 [2024-10-11 22:58:39.146200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.010 [2024-10-11 22:58:39.146409] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.010 [2024-10-11 22:58:39.146429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.010 [2024-10-11 22:58:39.146442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.010 [2024-10-11 22:58:39.149488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.010 [2024-10-11 22:58:39.158906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.010 [2024-10-11 22:58:39.159253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.010 [2024-10-11 22:58:39.159282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.010 [2024-10-11 22:58:39.159299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.010 [2024-10-11 22:58:39.159535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.010 [2024-10-11 22:58:39.159760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.010 [2024-10-11 22:58:39.159782] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.010 [2024-10-11 22:58:39.159795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.010 [2024-10-11 22:58:39.162793] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.010 [2024-10-11 22:58:39.172140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.010 [2024-10-11 22:58:39.172491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.010 [2024-10-11 22:58:39.172520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.010 [2024-10-11 22:58:39.172536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.010 [2024-10-11 22:58:39.172788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.010 [2024-10-11 22:58:39.173017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.010 [2024-10-11 22:58:39.173038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.010 [2024-10-11 22:58:39.173051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.010 [2024-10-11 22:58:39.176063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.010 [2024-10-11 22:58:39.185395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.010 [2024-10-11 22:58:39.185830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.010 [2024-10-11 22:58:39.185875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.010 [2024-10-11 22:58:39.185891] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.010 [2024-10-11 22:58:39.186132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.010 [2024-10-11 22:58:39.186356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.010 [2024-10-11 22:58:39.186377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.010 [2024-10-11 22:58:39.186390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.010 [2024-10-11 22:58:39.189438] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.010 [2024-10-11 22:58:39.198525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.010 [2024-10-11 22:58:39.198958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.010 [2024-10-11 22:58:39.198987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.010 [2024-10-11 22:58:39.199019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.010 [2024-10-11 22:58:39.199246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.010 [2024-10-11 22:58:39.199439] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.010 [2024-10-11 22:58:39.199457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.010 [2024-10-11 22:58:39.199469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.010 [2024-10-11 22:58:39.202473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.010 [2024-10-11 22:58:39.211745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.010 [2024-10-11 22:58:39.212112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.010 [2024-10-11 22:58:39.212154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.010 [2024-10-11 22:58:39.212170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.010 [2024-10-11 22:58:39.212419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.010 [2024-10-11 22:58:39.212659] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.010 [2024-10-11 22:58:39.212680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.010 [2024-10-11 22:58:39.212693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.010 [2024-10-11 22:58:39.215671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.010 [2024-10-11 22:58:39.224983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.010 [2024-10-11 22:58:39.225421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.010 [2024-10-11 22:58:39.225465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.010 [2024-10-11 22:58:39.225486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.010 [2024-10-11 22:58:39.225753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.010 [2024-10-11 22:58:39.225981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.010 [2024-10-11 22:58:39.226000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.010 [2024-10-11 22:58:39.226012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.010 [2024-10-11 22:58:39.228894] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.010 [2024-10-11 22:58:39.238266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.010 [2024-10-11 22:58:39.238636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.010 [2024-10-11 22:58:39.238681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.010 [2024-10-11 22:58:39.238698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.010 [2024-10-11 22:58:39.238967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.010 [2024-10-11 22:58:39.239165] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.010 [2024-10-11 22:58:39.239184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.010 [2024-10-11 22:58:39.239197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.010 [2024-10-11 22:58:39.242167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.010 [2024-10-11 22:58:39.251600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.010 [2024-10-11 22:58:39.251967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.011 [2024-10-11 22:58:39.252010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.011 [2024-10-11 22:58:39.252026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.011 [2024-10-11 22:58:39.252278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.011 [2024-10-11 22:58:39.252475] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.011 [2024-10-11 22:58:39.252494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.011 [2024-10-11 22:58:39.252506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.011 [2024-10-11 22:58:39.255428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.011 [2024-10-11 22:58:39.264706] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.011 [2024-10-11 22:58:39.265057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.011 [2024-10-11 22:58:39.265099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.011 [2024-10-11 22:58:39.265114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.011 [2024-10-11 22:58:39.265359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.011 [2024-10-11 22:58:39.265596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.011 [2024-10-11 22:58:39.265616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.011 [2024-10-11 22:58:39.265628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.011 [2024-10-11 22:58:39.268488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.270 [2024-10-11 22:58:39.278268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.270 [2024-10-11 22:58:39.278756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.270 [2024-10-11 22:58:39.278798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.270 [2024-10-11 22:58:39.278814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.270 [2024-10-11 22:58:39.279041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.270 [2024-10-11 22:58:39.279233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.270 [2024-10-11 22:58:39.279251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.270 [2024-10-11 22:58:39.279262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.270 [2024-10-11 22:58:39.282454] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.270 [2024-10-11 22:58:39.291430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.270 [2024-10-11 22:58:39.291799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.270 [2024-10-11 22:58:39.291842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.270 [2024-10-11 22:58:39.291858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.270 [2024-10-11 22:58:39.292085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.270 [2024-10-11 22:58:39.292277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.270 [2024-10-11 22:58:39.292295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.270 [2024-10-11 22:58:39.292307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.270 [2024-10-11 22:58:39.295118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.270 [2024-10-11 22:58:39.304527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.270 [2024-10-11 22:58:39.305025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.270 [2024-10-11 22:58:39.305076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.270 [2024-10-11 22:58:39.305092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.270 [2024-10-11 22:58:39.305352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.270 [2024-10-11 22:58:39.305544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.270 [2024-10-11 22:58:39.305572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.270 [2024-10-11 22:58:39.305584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.270 [2024-10-11 22:58:39.308424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.270 [2024-10-11 22:58:39.317883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.270 [2024-10-11 22:58:39.318261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.270 [2024-10-11 22:58:39.318289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.270 [2024-10-11 22:58:39.318304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.270 [2024-10-11 22:58:39.318558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.270 [2024-10-11 22:58:39.318777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.270 [2024-10-11 22:58:39.318797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.270 [2024-10-11 22:58:39.318809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.270 [2024-10-11 22:58:39.321769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.270 [2024-10-11 22:58:39.331074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.270 [2024-10-11 22:58:39.331476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.270 [2024-10-11 22:58:39.331504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.270 [2024-10-11 22:58:39.331519] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.270 [2024-10-11 22:58:39.331787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.270 [2024-10-11 22:58:39.332017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.270 [2024-10-11 22:58:39.332036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.270 [2024-10-11 22:58:39.332047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.270 [2024-10-11 22:58:39.334989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.270 [2024-10-11 22:58:39.344275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.270 [2024-10-11 22:58:39.344634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.270 [2024-10-11 22:58:39.344662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.270 [2024-10-11 22:58:39.344677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.270 [2024-10-11 22:58:39.344897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.270 [2024-10-11 22:58:39.345104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.270 [2024-10-11 22:58:39.345122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.270 [2024-10-11 22:58:39.345134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.270 [2024-10-11 22:58:39.348019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.270 [2024-10-11 22:58:39.357373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.270 [2024-10-11 22:58:39.357801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.270 [2024-10-11 22:58:39.357843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.270 [2024-10-11 22:58:39.357863] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.270 [2024-10-11 22:58:39.358121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.270 [2024-10-11 22:58:39.358354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.270 [2024-10-11 22:58:39.358375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.270 [2024-10-11 22:58:39.358388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.270 [2024-10-11 22:58:39.361921] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.270 [2024-10-11 22:58:39.370670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.270 [2024-10-11 22:58:39.371088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.270 [2024-10-11 22:58:39.371137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.270 [2024-10-11 22:58:39.371153] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.270 [2024-10-11 22:58:39.371412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.270 [2024-10-11 22:58:39.371631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.270 [2024-10-11 22:58:39.371651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.270 [2024-10-11 22:58:39.371664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.270 [2024-10-11 22:58:39.374626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.270 [2024-10-11 22:58:39.383882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.270 [2024-10-11 22:58:39.384259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.270 [2024-10-11 22:58:39.384287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.270 [2024-10-11 22:58:39.384303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.270 [2024-10-11 22:58:39.384542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.270 [2024-10-11 22:58:39.384758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.271 [2024-10-11 22:58:39.384776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.271 [2024-10-11 22:58:39.384788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.271 [2024-10-11 22:58:39.387648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.271 [2024-10-11 22:58:39.397067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.271 [2024-10-11 22:58:39.397399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.271 [2024-10-11 22:58:39.397425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.271 [2024-10-11 22:58:39.397440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.271 [2024-10-11 22:58:39.397672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.271 [2024-10-11 22:58:39.397879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.271 [2024-10-11 22:58:39.397904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.271 [2024-10-11 22:58:39.397916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.271 [2024-10-11 22:58:39.400680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.271 [2024-10-11 22:58:39.410085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.271 [2024-10-11 22:58:39.410445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.271 [2024-10-11 22:58:39.410487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.271 [2024-10-11 22:58:39.410502] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.271 [2024-10-11 22:58:39.410757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.271 [2024-10-11 22:58:39.410967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.271 [2024-10-11 22:58:39.410985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.271 [2024-10-11 22:58:39.410997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.271 [2024-10-11 22:58:39.413763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.271 [2024-10-11 22:58:39.423043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.271 [2024-10-11 22:58:39.423437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.271 [2024-10-11 22:58:39.423464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:36.271 [2024-10-11 22:58:39.423479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:36.271 [2024-10-11 22:58:39.423735] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:36.271 [2024-10-11 22:58:39.423962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.271 [2024-10-11 22:58:39.423981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.271 [2024-10-11 22:58:39.423993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.271 [2024-10-11 22:58:39.426886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.271 [2024-10-11 22:58:39.436164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.271 [2024-10-11 22:58:39.436536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.271 [2024-10-11 22:58:39.436570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:36.271 [2024-10-11 22:58:39.436586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:36.271 [2024-10-11 22:58:39.436820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:36.271 [2024-10-11 22:58:39.437028] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.271 [2024-10-11 22:58:39.437047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.271 [2024-10-11 22:58:39.437058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.271 [2024-10-11 22:58:39.439941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.271 [2024-10-11 22:58:39.449368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.271 [2024-10-11 22:58:39.449741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.271 [2024-10-11 22:58:39.449768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:36.271 [2024-10-11 22:58:39.449784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:36.271 [2024-10-11 22:58:39.450018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:36.271 [2024-10-11 22:58:39.450209] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.271 [2024-10-11 22:58:39.450227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.271 [2024-10-11 22:58:39.450238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.271 [2024-10-11 22:58:39.453004] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.271 [2024-10-11 22:58:39.462400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.271 [2024-10-11 22:58:39.462720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.271 [2024-10-11 22:58:39.462746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:36.271 [2024-10-11 22:58:39.462761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:36.271 [2024-10-11 22:58:39.462960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:36.271 [2024-10-11 22:58:39.463184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.271 [2024-10-11 22:58:39.463203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.271 [2024-10-11 22:58:39.463214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.271 [2024-10-11 22:58:39.465981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.271 4504.40 IOPS, 17.60 MiB/s [2024-10-11T20:58:39.539Z] [2024-10-11 22:58:39.476723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.271 [2024-10-11 22:58:39.477085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.271 [2024-10-11 22:58:39.477112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:36.271 [2024-10-11 22:58:39.477127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:36.271 [2024-10-11 22:58:39.477360] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:36.271 [2024-10-11 22:58:39.477577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.271 [2024-10-11 22:58:39.477597] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.271 [2024-10-11 22:58:39.477608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.271 [2024-10-11 22:58:39.480393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.271 [2024-10-11 22:58:39.489794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.271 [2024-10-11 22:58:39.490157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.271 [2024-10-11 22:58:39.490198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:36.271 [2024-10-11 22:58:39.490214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:36.271 [2024-10-11 22:58:39.490470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:36.271 [2024-10-11 22:58:39.490724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.271 [2024-10-11 22:58:39.490745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.271 [2024-10-11 22:58:39.490757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.271 [2024-10-11 22:58:39.493653] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.271 [2024-10-11 22:58:39.502903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.271 [2024-10-11 22:58:39.503230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.271 [2024-10-11 22:58:39.503257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:36.271 [2024-10-11 22:58:39.503272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:36.271 [2024-10-11 22:58:39.503493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:36.271 [2024-10-11 22:58:39.503730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.271 [2024-10-11 22:58:39.503750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.271 [2024-10-11 22:58:39.503761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.271 [2024-10-11 22:58:39.506641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.271 [2024-10-11 22:58:39.516050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.271 [2024-10-11 22:58:39.516526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.271 [2024-10-11 22:58:39.516580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:36.271 [2024-10-11 22:58:39.516595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:36.271 [2024-10-11 22:58:39.516853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:36.271 [2024-10-11 22:58:39.517045] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.271 [2024-10-11 22:58:39.517063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.271 [2024-10-11 22:58:39.517074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.271 [2024-10-11 22:58:39.519841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.271 [2024-10-11 22:58:39.529091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.271 [2024-10-11 22:58:39.529456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.271 [2024-10-11 22:58:39.529498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:36.271 [2024-10-11 22:58:39.529514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:36.272 [2024-10-11 22:58:39.529791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:36.272 [2024-10-11 22:58:39.529999] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.272 [2024-10-11 22:58:39.530018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.272 [2024-10-11 22:58:39.530034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.272 [2024-10-11 22:58:39.532802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.531 [2024-10-11 22:58:39.542508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.531 [2024-10-11 22:58:39.542918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.531 [2024-10-11 22:58:39.542976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:36.531 [2024-10-11 22:58:39.542992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:36.531 [2024-10-11 22:58:39.543232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:36.531 [2024-10-11 22:58:39.543440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.531 [2024-10-11 22:58:39.543459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.531 [2024-10-11 22:58:39.543470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.531 [2024-10-11 22:58:39.546450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.531 [2024-10-11 22:58:39.555599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.531 [2024-10-11 22:58:39.555940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.531 [2024-10-11 22:58:39.555980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:36.531 [2024-10-11 22:58:39.555995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:36.531 [2024-10-11 22:58:39.556236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:36.531 [2024-10-11 22:58:39.556427] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.531 [2024-10-11 22:58:39.556445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.531 [2024-10-11 22:58:39.556457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.531 [2024-10-11 22:58:39.559225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.531 [2024-10-11 22:58:39.568620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.531 [2024-10-11 22:58:39.568925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.531 [2024-10-11 22:58:39.568966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:36.531 [2024-10-11 22:58:39.568981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:36.531 [2024-10-11 22:58:39.569180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:36.531 [2024-10-11 22:58:39.569403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.531 [2024-10-11 22:58:39.569422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.531 [2024-10-11 22:58:39.569433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.531 [2024-10-11 22:58:39.572200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.531 [2024-10-11 22:58:39.581641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.531 [2024-10-11 22:58:39.582005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.531 [2024-10-11 22:58:39.582046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:36.531 [2024-10-11 22:58:39.582061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:36.531 [2024-10-11 22:58:39.582301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:36.531 [2024-10-11 22:58:39.582493] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.531 [2024-10-11 22:58:39.582511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.531 [2024-10-11 22:58:39.582522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.531 [2024-10-11 22:58:39.585406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.531 [2024-10-11 22:58:39.594831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.531 [2024-10-11 22:58:39.595234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.531 [2024-10-11 22:58:39.595261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:36.531 [2024-10-11 22:58:39.595275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:36.531 [2024-10-11 22:58:39.595497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:36.531 [2024-10-11 22:58:39.595732] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.531 [2024-10-11 22:58:39.595752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.531 [2024-10-11 22:58:39.595764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.531 [2024-10-11 22:58:39.598641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.531 [2024-10-11 22:58:39.608054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.531 [2024-10-11 22:58:39.608391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.531 [2024-10-11 22:58:39.608434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:36.531 [2024-10-11 22:58:39.608450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:36.531 [2024-10-11 22:58:39.608698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:36.531 [2024-10-11 22:58:39.608945] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.531 [2024-10-11 22:58:39.608966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.531 [2024-10-11 22:58:39.608980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.531 [2024-10-11 22:58:39.612419] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.531 [2024-10-11 22:58:39.621270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.531 [2024-10-11 22:58:39.621662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.531 [2024-10-11 22:58:39.621689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:36.531 [2024-10-11 22:58:39.621704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:36.531 [2024-10-11 22:58:39.621925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:36.531 [2024-10-11 22:58:39.622155] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.531 [2024-10-11 22:58:39.622174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.531 [2024-10-11 22:58:39.622186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.531 [2024-10-11 22:58:39.625149] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.531 [2024-10-11 22:58:39.634431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.531 [2024-10-11 22:58:39.634930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.531 [2024-10-11 22:58:39.634972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:36.531 [2024-10-11 22:58:39.634989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:36.531 [2024-10-11 22:58:39.635237] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:36.531 [2024-10-11 22:58:39.635444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.531 [2024-10-11 22:58:39.635462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.531 [2024-10-11 22:58:39.635473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.531 [2024-10-11 22:58:39.638245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.531 [2024-10-11 22:58:39.647485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.531 [2024-10-11 22:58:39.647846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.531 [2024-10-11 22:58:39.647874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:36.531 [2024-10-11 22:58:39.647904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:36.531 [2024-10-11 22:58:39.648142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:36.531 [2024-10-11 22:58:39.648334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.532 [2024-10-11 22:58:39.648352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.532 [2024-10-11 22:58:39.648363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.532 [2024-10-11 22:58:39.651212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.532 [2024-10-11 22:58:39.660628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.532 [2024-10-11 22:58:39.660944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.532 [2024-10-11 22:58:39.660971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:36.532 [2024-10-11 22:58:39.660986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:36.532 [2024-10-11 22:58:39.661207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:36.532 [2024-10-11 22:58:39.661415] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.532 [2024-10-11 22:58:39.661434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.532 [2024-10-11 22:58:39.661445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.532 [2024-10-11 22:58:39.664253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.532 [2024-10-11 22:58:39.673658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.532 [2024-10-11 22:58:39.674131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.532 [2024-10-11 22:58:39.674184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:36.532 [2024-10-11 22:58:39.674199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:36.532 [2024-10-11 22:58:39.674464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:36.532 [2024-10-11 22:58:39.674682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.532 [2024-10-11 22:58:39.674702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.532 [2024-10-11 22:58:39.674714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.532 [2024-10-11 22:58:39.677472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.532 [2024-10-11 22:58:39.686773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.532 [2024-10-11 22:58:39.687154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.532 [2024-10-11 22:58:39.687220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:36.532 [2024-10-11 22:58:39.687235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:36.532 [2024-10-11 22:58:39.687482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:36.532 [2024-10-11 22:58:39.687717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.532 [2024-10-11 22:58:39.687737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.532 [2024-10-11 22:58:39.687748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.532 [2024-10-11 22:58:39.690626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.532 [2024-10-11 22:58:39.699861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.532 [2024-10-11 22:58:39.700206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.532 [2024-10-11 22:58:39.700232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:36.532 [2024-10-11 22:58:39.700248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:36.532 [2024-10-11 22:58:39.700482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:36.532 [2024-10-11 22:58:39.700717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.532 [2024-10-11 22:58:39.700737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.532 [2024-10-11 22:58:39.700749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.532 [2024-10-11 22:58:39.703627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.532 [2024-10-11 22:58:39.713040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.532 [2024-10-11 22:58:39.713399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.532 [2024-10-11 22:58:39.713425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.532 [2024-10-11 22:58:39.713445] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.532 [2024-10-11 22:58:39.713689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.532 [2024-10-11 22:58:39.713897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.532 [2024-10-11 22:58:39.713916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.532 [2024-10-11 22:58:39.713927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.532 [2024-10-11 22:58:39.716689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.532 [2024-10-11 22:58:39.726155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.532 [2024-10-11 22:58:39.726565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.532 [2024-10-11 22:58:39.726606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.532 [2024-10-11 22:58:39.726622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.532 [2024-10-11 22:58:39.726855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.532 [2024-10-11 22:58:39.727046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.532 [2024-10-11 22:58:39.727064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.532 [2024-10-11 22:58:39.727076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.532 [2024-10-11 22:58:39.729958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.532 [2024-10-11 22:58:39.739404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.532 [2024-10-11 22:58:39.739837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.532 [2024-10-11 22:58:39.739878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.532 [2024-10-11 22:58:39.739893] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.532 [2024-10-11 22:58:39.740140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.532 [2024-10-11 22:58:39.740347] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.532 [2024-10-11 22:58:39.740365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.532 [2024-10-11 22:58:39.740377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.532 [2024-10-11 22:58:39.743181] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.532 [2024-10-11 22:58:39.752421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.532 [2024-10-11 22:58:39.752849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.532 [2024-10-11 22:58:39.752899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.532 [2024-10-11 22:58:39.752915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.532 [2024-10-11 22:58:39.753176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.532 [2024-10-11 22:58:39.753373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.532 [2024-10-11 22:58:39.753391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.532 [2024-10-11 22:58:39.753403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.532 [2024-10-11 22:58:39.756172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.532 [2024-10-11 22:58:39.765449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.532 [2024-10-11 22:58:39.765951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.532 [2024-10-11 22:58:39.766004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.532 [2024-10-11 22:58:39.766019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.532 [2024-10-11 22:58:39.766278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.532 [2024-10-11 22:58:39.766469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.532 [2024-10-11 22:58:39.766487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.532 [2024-10-11 22:58:39.766499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.532 [2024-10-11 22:58:39.769380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.532 [2024-10-11 22:58:39.778630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.532 [2024-10-11 22:58:39.778956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.532 [2024-10-11 22:58:39.779020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.532 [2024-10-11 22:58:39.779035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.532 [2024-10-11 22:58:39.779261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.532 [2024-10-11 22:58:39.779453] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.532 [2024-10-11 22:58:39.779471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.532 [2024-10-11 22:58:39.779483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.532 [2024-10-11 22:58:39.782364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.532 [2024-10-11 22:58:39.791610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.532 [2024-10-11 22:58:39.792085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.532 [2024-10-11 22:58:39.792137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.532 [2024-10-11 22:58:39.792152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.532 [2024-10-11 22:58:39.792391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.532 [2024-10-11 22:58:39.792593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.533 [2024-10-11 22:58:39.792612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.533 [2024-10-11 22:58:39.792624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.533 [2024-10-11 22:58:39.795680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.791 [2024-10-11 22:58:39.805185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.791 [2024-10-11 22:58:39.805496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.791 [2024-10-11 22:58:39.805521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.791 [2024-10-11 22:58:39.805536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.791 [2024-10-11 22:58:39.805773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.792 [2024-10-11 22:58:39.805982] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.792 [2024-10-11 22:58:39.806000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.792 [2024-10-11 22:58:39.806012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.792 [2024-10-11 22:58:39.808930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.792 [2024-10-11 22:58:39.818360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.792 [2024-10-11 22:58:39.818839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.792 [2024-10-11 22:58:39.818891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.792 [2024-10-11 22:58:39.818906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.792 [2024-10-11 22:58:39.819169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.792 [2024-10-11 22:58:39.819360] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.792 [2024-10-11 22:58:39.819378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.792 [2024-10-11 22:58:39.819390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.792 [2024-10-11 22:58:39.822153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.792 [2024-10-11 22:58:39.831480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.792 [2024-10-11 22:58:39.831866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.792 [2024-10-11 22:58:39.831908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.792 [2024-10-11 22:58:39.831923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.792 [2024-10-11 22:58:39.832149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.792 [2024-10-11 22:58:39.832340] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.792 [2024-10-11 22:58:39.832359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.792 [2024-10-11 22:58:39.832370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.792 [2024-10-11 22:58:39.835218] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.792 [2024-10-11 22:58:39.844507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.792 [2024-10-11 22:58:39.844984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.792 [2024-10-11 22:58:39.845033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.792 [2024-10-11 22:58:39.845053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.792 [2024-10-11 22:58:39.845315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.792 [2024-10-11 22:58:39.845507] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.792 [2024-10-11 22:58:39.845525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.792 [2024-10-11 22:58:39.845536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.792 [2024-10-11 22:58:39.848416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.792 [2024-10-11 22:58:39.857681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.792 [2024-10-11 22:58:39.858091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.792 [2024-10-11 22:58:39.858118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.792 [2024-10-11 22:58:39.858150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.792 [2024-10-11 22:58:39.858390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.792 [2024-10-11 22:58:39.858604] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.792 [2024-10-11 22:58:39.858625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.792 [2024-10-11 22:58:39.858638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.792 [2024-10-11 22:58:39.862142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.792 [2024-10-11 22:58:39.870955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.792 [2024-10-11 22:58:39.871336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.792 [2024-10-11 22:58:39.871378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.792 [2024-10-11 22:58:39.871394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.792 [2024-10-11 22:58:39.871627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.792 [2024-10-11 22:58:39.871847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.792 [2024-10-11 22:58:39.871866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.792 [2024-10-11 22:58:39.871878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.792 [2024-10-11 22:58:39.874868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.792 [2024-10-11 22:58:39.884172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.792 [2024-10-11 22:58:39.884600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.792 [2024-10-11 22:58:39.884629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.792 [2024-10-11 22:58:39.884646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.792 [2024-10-11 22:58:39.884886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.792 [2024-10-11 22:58:39.885093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.792 [2024-10-11 22:58:39.885116] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.792 [2024-10-11 22:58:39.885128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.792 [2024-10-11 22:58:39.888054] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.792 [2024-10-11 22:58:39.897280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.792 [2024-10-11 22:58:39.897643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.792 [2024-10-11 22:58:39.897670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.792 [2024-10-11 22:58:39.897686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.792 [2024-10-11 22:58:39.897904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.792 [2024-10-11 22:58:39.898111] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.792 [2024-10-11 22:58:39.898129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.792 [2024-10-11 22:58:39.898140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.792 [2024-10-11 22:58:39.901026] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.792 [2024-10-11 22:58:39.910651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.792 [2024-10-11 22:58:39.911048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.792 [2024-10-11 22:58:39.911088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.792 [2024-10-11 22:58:39.911104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.792 [2024-10-11 22:58:39.911338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.792 [2024-10-11 22:58:39.911576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.792 [2024-10-11 22:58:39.911598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.792 [2024-10-11 22:58:39.911611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.792 [2024-10-11 22:58:39.914626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.792 [2024-10-11 22:58:39.923968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.792 [2024-10-11 22:58:39.924327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.792 [2024-10-11 22:58:39.924354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.792 [2024-10-11 22:58:39.924369] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.792 [2024-10-11 22:58:39.924602] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.792 [2024-10-11 22:58:39.924820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.792 [2024-10-11 22:58:39.924855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.792 [2024-10-11 22:58:39.924868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.792 [2024-10-11 22:58:39.927961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.792 [2024-10-11 22:58:39.937208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.792 [2024-10-11 22:58:39.937701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.792 [2024-10-11 22:58:39.937730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.792 [2024-10-11 22:58:39.937745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.792 [2024-10-11 22:58:39.937985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.792 [2024-10-11 22:58:39.938176] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.793 [2024-10-11 22:58:39.938194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.793 [2024-10-11 22:58:39.938206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.793 [2024-10-11 22:58:39.941172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.793 [2024-10-11 22:58:39.950504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.793 [2024-10-11 22:58:39.950887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.793 [2024-10-11 22:58:39.950929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.793 [2024-10-11 22:58:39.950945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.793 [2024-10-11 22:58:39.951211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.793 [2024-10-11 22:58:39.951403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.793 [2024-10-11 22:58:39.951421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.793 [2024-10-11 22:58:39.951432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.793 [2024-10-11 22:58:39.954376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.793 [2024-10-11 22:58:39.963675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.793 [2024-10-11 22:58:39.964038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.793 [2024-10-11 22:58:39.964079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.793 [2024-10-11 22:58:39.964095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.793 [2024-10-11 22:58:39.964316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.793 [2024-10-11 22:58:39.964524] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.793 [2024-10-11 22:58:39.964542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.793 [2024-10-11 22:58:39.964580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.793 [2024-10-11 22:58:39.967440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.793 [2024-10-11 22:58:39.976910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.793 [2024-10-11 22:58:39.977318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.793 [2024-10-11 22:58:39.977359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.793 [2024-10-11 22:58:39.977374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.793 [2024-10-11 22:58:39.977628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.793 [2024-10-11 22:58:39.977826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.793 [2024-10-11 22:58:39.977845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.793 [2024-10-11 22:58:39.977857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.793 [2024-10-11 22:58:39.980637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.793 [2024-10-11 22:58:39.989872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.793 [2024-10-11 22:58:39.990229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.793 [2024-10-11 22:58:39.990270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.793 [2024-10-11 22:58:39.990286] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.793 [2024-10-11 22:58:39.990532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.793 [2024-10-11 22:58:39.990732] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.793 [2024-10-11 22:58:39.990751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.793 [2024-10-11 22:58:39.990763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.793 [2024-10-11 22:58:39.993522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.793 [2024-10-11 22:58:40.003155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.793 [2024-10-11 22:58:40.003521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.793 [2024-10-11 22:58:40.003571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.793 [2024-10-11 22:58:40.003590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.793 [2024-10-11 22:58:40.003843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.793 [2024-10-11 22:58:40.004050] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.793 [2024-10-11 22:58:40.004068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.793 [2024-10-11 22:58:40.004080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.793 [2024-10-11 22:58:40.007073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.793 [2024-10-11 22:58:40.017177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.793 [2024-10-11 22:58:40.017596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.793 [2024-10-11 22:58:40.017637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.793 [2024-10-11 22:58:40.017662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.793 [2024-10-11 22:58:40.017950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.793 [2024-10-11 22:58:40.018194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.793 [2024-10-11 22:58:40.018220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.793 [2024-10-11 22:58:40.018248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.793 [2024-10-11 22:58:40.022011] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.793 [2024-10-11 22:58:40.031210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.793 [2024-10-11 22:58:40.031651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.793 [2024-10-11 22:58:40.031687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.793 [2024-10-11 22:58:40.031713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.793 [2024-10-11 22:58:40.031992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.793 [2024-10-11 22:58:40.032235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.793 [2024-10-11 22:58:40.032263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.793 [2024-10-11 22:58:40.032298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.793 [2024-10-11 22:58:40.036073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.793 [2024-10-11 22:58:40.045233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.793 [2024-10-11 22:58:40.045716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.793 [2024-10-11 22:58:40.045756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:36.793 [2024-10-11 22:58:40.045781] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:36.793 [2024-10-11 22:58:40.046074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:36.793 [2024-10-11 22:58:40.046322] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.793 [2024-10-11 22:58:40.046349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.793 [2024-10-11 22:58:40.046369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.793 [2024-10-11 22:58:40.050183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:37.052 [2024-10-11 22:58:40.060010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:37.052 [2024-10-11 22:58:40.060444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.052 [2024-10-11 22:58:40.060497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:37.052 [2024-10-11 22:58:40.060520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:37.052 [2024-10-11 22:58:40.060825] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:37.053 [2024-10-11 22:58:40.061092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:37.053 [2024-10-11 22:58:40.061119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:37.053 [2024-10-11 22:58:40.061137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:37.053 [2024-10-11 22:58:40.065367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:37.053 [2024-10-11 22:58:40.073703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:37.053 [2024-10-11 22:58:40.074114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.053 [2024-10-11 22:58:40.074153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420
00:35:37.053 [2024-10-11 22:58:40.074171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set
00:35:37.053 [2024-10-11 22:58:40.074458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor
00:35:37.053 [2024-10-11 22:58:40.074706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:37.053 [2024-10-11 22:58:40.074728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:37.053 [2024-10-11 22:58:40.074741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:37.053 [2024-10-11 22:58:40.077892] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:37.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 396457 Killed "${NVMF_APP[@]}" "$@" 00:35:37.053 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:35:37.053 [2024-10-11 22:58:40.087128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.053 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:37.053 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:37.053 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:37.053 [2024-10-11 22:58:40.087560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.053 [2024-10-11 22:58:40.087590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.053 [2024-10-11 22:58:40.087606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.053 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:37.053 [2024-10-11 22:58:40.087835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.053 [2024-10-11 22:58:40.088067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.053 [2024-10-11 22:58:40.088087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.053 [2024-10-11 22:58:40.088099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:35:37.053 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=397445 00:35:37.053 [2024-10-11 22:58:40.091309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.053 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:37.053 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 397445 00:35:37.053 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 397445 ']' 00:35:37.053 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:37.053 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:37.053 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:37.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:37.053 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:37.053 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:37.053 [2024-10-11 22:58:40.100665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.053 [2024-10-11 22:58:40.101073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.053 [2024-10-11 22:58:40.101116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.053 [2024-10-11 22:58:40.101132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.053 [2024-10-11 22:58:40.101374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.053 [2024-10-11 22:58:40.101599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.053 [2024-10-11 22:58:40.101620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.053 [2024-10-11 22:58:40.101633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.053 [2024-10-11 22:58:40.104946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.053 [2024-10-11 22:58:40.114292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.053 [2024-10-11 22:58:40.114670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.053 [2024-10-11 22:58:40.114699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.053 [2024-10-11 22:58:40.114715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.053 [2024-10-11 22:58:40.114929] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.053 [2024-10-11 22:58:40.115147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.053 [2024-10-11 22:58:40.115168] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.053 [2024-10-11 22:58:40.115182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.053 [2024-10-11 22:58:40.118425] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.053 [2024-10-11 22:58:40.127859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.053 [2024-10-11 22:58:40.128347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.053 [2024-10-11 22:58:40.128376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.053 [2024-10-11 22:58:40.128392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.053 [2024-10-11 22:58:40.128616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.053 [2024-10-11 22:58:40.128849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.053 [2024-10-11 22:58:40.128870] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.053 [2024-10-11 22:58:40.128884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.053 [2024-10-11 22:58:40.132205] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.053 [2024-10-11 22:58:40.141394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.053 [2024-10-11 22:58:40.141748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.053 [2024-10-11 22:58:40.141776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.053 [2024-10-11 22:58:40.141792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.053 [2024-10-11 22:58:40.142034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.053 [2024-10-11 22:58:40.142254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.053 [2024-10-11 22:58:40.142273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.053 [2024-10-11 22:58:40.142286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.053 [2024-10-11 22:58:40.142974] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:35:37.053 [2024-10-11 22:58:40.143062] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:37.053 [2024-10-11 22:58:40.145518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.053 [2024-10-11 22:58:40.155017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.053 [2024-10-11 22:58:40.155352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.053 [2024-10-11 22:58:40.155382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.053 [2024-10-11 22:58:40.155399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.053 [2024-10-11 22:58:40.155638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.053 [2024-10-11 22:58:40.155859] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.053 [2024-10-11 22:58:40.155879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.053 [2024-10-11 22:58:40.155905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.053 [2024-10-11 22:58:40.159077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.053 [2024-10-11 22:58:40.168477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.053 [2024-10-11 22:58:40.168886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.053 [2024-10-11 22:58:40.168924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.053 [2024-10-11 22:58:40.168940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.053 [2024-10-11 22:58:40.169180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.053 [2024-10-11 22:58:40.169394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.054 [2024-10-11 22:58:40.169413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.054 [2024-10-11 22:58:40.169425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.054 [2024-10-11 22:58:40.172630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.054 [2024-10-11 22:58:40.182108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.054 [2024-10-11 22:58:40.182565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.054 [2024-10-11 22:58:40.182594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.054 [2024-10-11 22:58:40.182610] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.054 [2024-10-11 22:58:40.182838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.054 [2024-10-11 22:58:40.183060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.054 [2024-10-11 22:58:40.183079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.054 [2024-10-11 22:58:40.183091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.054 [2024-10-11 22:58:40.186329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.054 [2024-10-11 22:58:40.195437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.054 [2024-10-11 22:58:40.195823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.054 [2024-10-11 22:58:40.195858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.054 [2024-10-11 22:58:40.195873] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.054 [2024-10-11 22:58:40.196101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.054 [2024-10-11 22:58:40.196315] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.054 [2024-10-11 22:58:40.196334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.054 [2024-10-11 22:58:40.196346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.054 [2024-10-11 22:58:40.199384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.054 [2024-10-11 22:58:40.208737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.054 [2024-10-11 22:58:40.209096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.054 [2024-10-11 22:58:40.209123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.054 [2024-10-11 22:58:40.209139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.054 [2024-10-11 22:58:40.209385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.054 [2024-10-11 22:58:40.209591] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.054 [2024-10-11 22:58:40.209611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.054 [2024-10-11 22:58:40.209623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.054 [2024-10-11 22:58:40.212658] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.054 [2024-10-11 22:58:40.213772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:37.054 [2024-10-11 22:58:40.222210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.054 [2024-10-11 22:58:40.222686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.054 [2024-10-11 22:58:40.222722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.054 [2024-10-11 22:58:40.222741] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.054 [2024-10-11 22:58:40.223002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.054 [2024-10-11 22:58:40.223203] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.054 [2024-10-11 22:58:40.223223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.054 [2024-10-11 22:58:40.223244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.054 [2024-10-11 22:58:40.226285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.054 [2024-10-11 22:58:40.235501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.054 [2024-10-11 22:58:40.235962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.054 [2024-10-11 22:58:40.236011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.054 [2024-10-11 22:58:40.236030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.054 [2024-10-11 22:58:40.236277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.054 [2024-10-11 22:58:40.236492] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.054 [2024-10-11 22:58:40.236511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.054 [2024-10-11 22:58:40.236524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.054 [2024-10-11 22:58:40.239631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.054 [2024-10-11 22:58:40.248838] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.054 [2024-10-11 22:58:40.249242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.054 [2024-10-11 22:58:40.249270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.054 [2024-10-11 22:58:40.249286] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.054 [2024-10-11 22:58:40.249543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.054 [2024-10-11 22:58:40.249797] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.054 [2024-10-11 22:58:40.249818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.054 [2024-10-11 22:58:40.249831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.054 [2024-10-11 22:58:40.252828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.054 [2024-10-11 22:58:40.261156] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:37.054 [2024-10-11 22:58:40.261210] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:37.054 [2024-10-11 22:58:40.261225] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:37.054 [2024-10-11 22:58:40.261236] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:35:37.054 [2024-10-11 22:58:40.261247] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:37.054 [2024-10-11 22:58:40.262244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.054 [2024-10-11 22:58:40.262624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.054 [2024-10-11 22:58:40.262655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.054 [2024-10-11 22:58:40.262672] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.054 [2024-10-11 22:58:40.262635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:37.054 [2024-10-11 22:58:40.262686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:37.054 [2024-10-11 22:58:40.262782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:37.054 [2024-10-11 22:58:40.262915] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.054 [2024-10-11 22:58:40.263153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.054 [2024-10-11 22:58:40.263174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.054 [2024-10-11 22:58:40.263188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.054 [2024-10-11 22:58:40.266369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.054 [2024-10-11 22:58:40.275732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.054 [2024-10-11 22:58:40.276234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.054 [2024-10-11 22:58:40.276272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.054 [2024-10-11 22:58:40.276291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.054 [2024-10-11 22:58:40.276537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.054 [2024-10-11 22:58:40.276784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.054 [2024-10-11 22:58:40.276806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.054 [2024-10-11 22:58:40.276822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.054 [2024-10-11 22:58:40.280061] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.054 [2024-10-11 22:58:40.289224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.054 [2024-10-11 22:58:40.289751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.054 [2024-10-11 22:58:40.289788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.054 [2024-10-11 22:58:40.289807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.054 [2024-10-11 22:58:40.290051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.054 [2024-10-11 22:58:40.290270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.054 [2024-10-11 22:58:40.290290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.054 [2024-10-11 22:58:40.290306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.054 [2024-10-11 22:58:40.293586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.054 [2024-10-11 22:58:40.302794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.054 [2024-10-11 22:58:40.303242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.054 [2024-10-11 22:58:40.303280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.054 [2024-10-11 22:58:40.303298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.054 [2024-10-11 22:58:40.303520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.054 [2024-10-11 22:58:40.303758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.054 [2024-10-11 22:58:40.303781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.055 [2024-10-11 22:58:40.303806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.055 [2024-10-11 22:58:40.307129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.055 [2024-10-11 22:58:40.316449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.055 [2024-10-11 22:58:40.316895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.055 [2024-10-11 22:58:40.316931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.055 [2024-10-11 22:58:40.316949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.055 [2024-10-11 22:58:40.317170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.055 [2024-10-11 22:58:40.317401] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.055 [2024-10-11 22:58:40.317422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.055 [2024-10-11 22:58:40.317437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.314 [2024-10-11 22:58:40.320739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.314 [2024-10-11 22:58:40.330085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.314 [2024-10-11 22:58:40.330582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.314 [2024-10-11 22:58:40.330620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.314 [2024-10-11 22:58:40.330640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.314 [2024-10-11 22:58:40.330885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.314 [2024-10-11 22:58:40.331100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.314 [2024-10-11 22:58:40.331120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.314 [2024-10-11 22:58:40.331136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.314 [2024-10-11 22:58:40.334297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.314 [2024-10-11 22:58:40.343701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.314 [2024-10-11 22:58:40.344129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.314 [2024-10-11 22:58:40.344165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.314 [2024-10-11 22:58:40.344185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.314 [2024-10-11 22:58:40.344420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.314 [2024-10-11 22:58:40.344646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.314 [2024-10-11 22:58:40.344668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.314 [2024-10-11 22:58:40.344683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.314 [2024-10-11 22:58:40.347813] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.314 [2024-10-11 22:58:40.357362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.314 [2024-10-11 22:58:40.357719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.314 [2024-10-11 22:58:40.357756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.314 [2024-10-11 22:58:40.357773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.314 [2024-10-11 22:58:40.357987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.314 [2024-10-11 22:58:40.358205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.314 [2024-10-11 22:58:40.358226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.314 [2024-10-11 22:58:40.358239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.314 [2024-10-11 22:58:40.361483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.314 [2024-10-11 22:58:40.371107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.314 [2024-10-11 22:58:40.371433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.314 [2024-10-11 22:58:40.371463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.314 [2024-10-11 22:58:40.371480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.314 [2024-10-11 22:58:40.371703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.314 [2024-10-11 22:58:40.371921] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.314 [2024-10-11 22:58:40.371942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.314 [2024-10-11 22:58:40.371955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.314 [2024-10-11 22:58:40.375176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.314 [2024-10-11 22:58:40.384753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.314 [2024-10-11 22:58:40.385124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.314 [2024-10-11 22:58:40.385151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.314 [2024-10-11 22:58:40.385167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.314 [2024-10-11 22:58:40.385394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.314 [2024-10-11 22:58:40.385632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.314 [2024-10-11 22:58:40.385654] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.314 [2024-10-11 22:58:40.385668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.314 [2024-10-11 22:58:40.388969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.314 [2024-10-11 22:58:40.398296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.314 [2024-10-11 22:58:40.398695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.314 [2024-10-11 22:58:40.398723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.314 [2024-10-11 22:58:40.398739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.314 [2024-10-11 22:58:40.398953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.314 [2024-10-11 22:58:40.399190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.314 [2024-10-11 22:58:40.399218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.314 [2024-10-11 22:58:40.399231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.314 [2024-10-11 22:58:40.402433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.314 [2024-10-11 22:58:40.411768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.314 [2024-10-11 22:58:40.412143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.314 [2024-10-11 22:58:40.412171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.314 [2024-10-11 22:58:40.412187] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.314 [2024-10-11 22:58:40.412400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.314 [2024-10-11 22:58:40.412653] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.314 [2024-10-11 22:58:40.412675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.314 [2024-10-11 22:58:40.412688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.314 [2024-10-11 22:58:40.415855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.314 [2024-10-11 22:58:40.425205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.314 [2024-10-11 22:58:40.425573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.314 [2024-10-11 22:58:40.425603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.314 [2024-10-11 22:58:40.425620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.314 [2024-10-11 22:58:40.425834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.314 [2024-10-11 22:58:40.426060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.314 [2024-10-11 22:58:40.426080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.314 [2024-10-11 22:58:40.426093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.314 [2024-10-11 22:58:40.429269] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.314 [2024-10-11 22:58:40.438666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.314 [2024-10-11 22:58:40.439038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.314 [2024-10-11 22:58:40.439067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.314 [2024-10-11 22:58:40.439083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.314 [2024-10-11 22:58:40.439296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.314 [2024-10-11 22:58:40.439523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.314 [2024-10-11 22:58:40.439575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.314 [2024-10-11 22:58:40.439596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.314 [2024-10-11 22:58:40.442766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.314 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:37.314 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:35:37.314 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:37.314 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:37.314 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:37.314 [2024-10-11 22:58:40.452265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.314 [2024-10-11 22:58:40.452634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.315 [2024-10-11 22:58:40.452662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.315 [2024-10-11 22:58:40.452678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.315 [2024-10-11 22:58:40.452891] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.315 [2024-10-11 22:58:40.453117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.315 [2024-10-11 22:58:40.453138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.315 [2024-10-11 22:58:40.453152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.315 [2024-10-11 22:58:40.456409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.315 [2024-10-11 22:58:40.465893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.315 [2024-10-11 22:58:40.466300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.315 [2024-10-11 22:58:40.466328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.315 [2024-10-11 22:58:40.466344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.315 [2024-10-11 22:58:40.466566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.315 [2024-10-11 22:58:40.466784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.315 [2024-10-11 22:58:40.466805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.315 [2024-10-11 22:58:40.466818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.315 [2024-10-11 22:58:40.470032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.315 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:37.315 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:37.315 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.315 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:37.315 [2024-10-11 22:58:40.476003] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:37.315 3753.67 IOPS, 14.66 MiB/s [2024-10-11T20:58:40.583Z] [2024-10-11 22:58:40.480950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.315 [2024-10-11 22:58:40.481343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.315 [2024-10-11 22:58:40.481371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.315 [2024-10-11 22:58:40.481386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.315 [2024-10-11 22:58:40.481618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.315 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.315 [2024-10-11 22:58:40.481837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.315 [2024-10-11 22:58:40.481858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.315 [2024-10-11 22:58:40.481871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:35:37.315 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:37.315 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.315 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:37.315 [2024-10-11 22:58:40.485152] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.315 [2024-10-11 22:58:40.494493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.315 [2024-10-11 22:58:40.495004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.315 [2024-10-11 22:58:40.495037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.315 [2024-10-11 22:58:40.495055] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.315 [2024-10-11 22:58:40.495286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.315 [2024-10-11 22:58:40.495493] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.315 [2024-10-11 22:58:40.495513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.315 [2024-10-11 22:58:40.495527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.315 [2024-10-11 22:58:40.498816] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.315 [2024-10-11 22:58:40.508043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.315 [2024-10-11 22:58:40.508392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.315 [2024-10-11 22:58:40.508419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.315 [2024-10-11 22:58:40.508435] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.315 [2024-10-11 22:58:40.508665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.315 [2024-10-11 22:58:40.508898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.315 [2024-10-11 22:58:40.508919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.315 [2024-10-11 22:58:40.508932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.315 [2024-10-11 22:58:40.512122] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.315 Malloc0 00:35:37.315 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.315 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:37.315 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.315 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:37.315 [2024-10-11 22:58:40.521690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.315 [2024-10-11 22:58:40.522076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.315 [2024-10-11 22:58:40.522104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.315 [2024-10-11 22:58:40.522121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.315 [2024-10-11 22:58:40.522350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.315 [2024-10-11 22:58:40.522592] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.315 [2024-10-11 22:58:40.522614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.315 [2024-10-11 22:58:40.522629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:35:37.315 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.315 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:37.315 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.315 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:37.315 [2024-10-11 22:58:40.525894] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.315 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.315 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:37.315 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.315 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:37.315 [2024-10-11 22:58:40.535284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.315 [2024-10-11 22:58:40.535689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.315 [2024-10-11 22:58:40.535717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254e00 with addr=10.0.0.2, port=4420 00:35:37.315 [2024-10-11 22:58:40.535733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254e00 is same with the state(6) to be set 00:35:37.315 [2024-10-11 22:58:40.535947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254e00 (9): Bad file descriptor 00:35:37.315 [2024-10-11 22:58:40.536172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.315 
[2024-10-11 22:58:40.536192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.315 [2024-10-11 22:58:40.536206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.315 [2024-10-11 22:58:40.536874] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:37.315 [2024-10-11 22:58:40.539494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.315 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.315 22:58:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 396742 00:35:37.315 [2024-10-11 22:58:40.548873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.573 [2024-10-11 22:58:40.586285] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:35:39.439 4322.29 IOPS, 16.88 MiB/s [2024-10-11T20:58:43.640Z] 4846.50 IOPS, 18.93 MiB/s [2024-10-11T20:58:44.573Z] 5271.78 IOPS, 20.59 MiB/s [2024-10-11T20:58:45.507Z] 5582.30 IOPS, 21.81 MiB/s [2024-10-11T20:58:46.879Z] 5848.36 IOPS, 22.85 MiB/s [2024-10-11T20:58:47.812Z] 6077.92 IOPS, 23.74 MiB/s [2024-10-11T20:58:48.744Z] 6268.92 IOPS, 24.49 MiB/s [2024-10-11T20:58:49.678Z] 6415.29 IOPS, 25.06 MiB/s [2024-10-11T20:58:49.678Z] 6564.53 IOPS, 25.64 MiB/s 00:35:46.410 Latency(us) 00:35:46.410 [2024-10-11T20:58:49.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:46.410 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:46.410 Verification LBA range: start 0x0 length 0x4000 00:35:46.410 Nvme1n1 : 15.01 6566.91 25.65 10325.33 0.00 7554.67 579.51 17185.00 00:35:46.410 [2024-10-11T20:58:49.678Z] =================================================================================================================== 00:35:46.410 [2024-10-11T20:58:49.678Z] Total : 6566.91 25.65 10325.33 0.00 7554.67 579.51 17185.00 00:35:46.668 22:58:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:46.668 22:58:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:46.668 22:58:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.668 22:58:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:46.668 22:58:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.668 22:58:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:46.668 22:58:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:46.668 22:58:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:46.668 22:58:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:35:46.668 22:58:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:46.668 22:58:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:35:46.668 22:58:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:46.668 22:58:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:46.668 rmmod nvme_tcp 00:35:46.668 rmmod nvme_fabrics 00:35:46.668 rmmod nvme_keyring 00:35:46.668 22:58:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:46.668 22:58:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:35:46.668 22:58:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:35:46.668 22:58:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 397445 ']' 00:35:46.668 22:58:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 397445 00:35:46.668 22:58:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 397445 ']' 00:35:46.668 22:58:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 397445 00:35:46.668 22:58:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:35:46.668 22:58:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:46.668 22:58:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 397445 00:35:46.668 22:58:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:46.668 22:58:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:46.668 22:58:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 397445' 00:35:46.668 killing process with pid 397445 00:35:46.668 22:58:49 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 397445 00:35:46.668 22:58:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 397445 00:35:46.927 22:58:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:46.927 22:58:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:46.927 22:58:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:46.927 22:58:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:35:46.927 22:58:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:35:46.927 22:58:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:46.927 22:58:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:35:46.927 22:58:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:46.927 22:58:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:46.927 22:58:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:46.927 22:58:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:46.927 22:58:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:48.835 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:48.835 00:35:48.835 real 0m22.510s 00:35:48.835 user 1m0.178s 00:35:48.835 sys 0m4.166s 00:35:48.835 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:48.835 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:48.835 ************************************ 00:35:48.836 END TEST nvmf_bdevperf 00:35:48.836 
************************************ 00:35:48.836 22:58:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:48.836 22:58:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:48.836 22:58:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:48.836 22:58:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.094 ************************************ 00:35:49.094 START TEST nvmf_target_disconnect 00:35:49.094 ************************************ 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:49.094 * Looking for test storage... 00:35:49.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:49.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.094 --rc genhtml_branch_coverage=1 00:35:49.094 --rc genhtml_function_coverage=1 00:35:49.094 --rc genhtml_legend=1 00:35:49.094 --rc geninfo_all_blocks=1 00:35:49.094 --rc geninfo_unexecuted_blocks=1 
00:35:49.094 00:35:49.094 ' 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:49.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.094 --rc genhtml_branch_coverage=1 00:35:49.094 --rc genhtml_function_coverage=1 00:35:49.094 --rc genhtml_legend=1 00:35:49.094 --rc geninfo_all_blocks=1 00:35:49.094 --rc geninfo_unexecuted_blocks=1 00:35:49.094 00:35:49.094 ' 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:49.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.094 --rc genhtml_branch_coverage=1 00:35:49.094 --rc genhtml_function_coverage=1 00:35:49.094 --rc genhtml_legend=1 00:35:49.094 --rc geninfo_all_blocks=1 00:35:49.094 --rc geninfo_unexecuted_blocks=1 00:35:49.094 00:35:49.094 ' 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:49.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.094 --rc genhtml_branch_coverage=1 00:35:49.094 --rc genhtml_function_coverage=1 00:35:49.094 --rc genhtml_legend=1 00:35:49.094 --rc geninfo_all_blocks=1 00:35:49.094 --rc geninfo_unexecuted_blocks=1 00:35:49.094 00:35:49.094 ' 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:49.094 22:58:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:49.094 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:49.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:49.095 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:49.095 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:49.095 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:49.095 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:49.095 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:49.095 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:35:49.095 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:49.095 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:49.095 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:49.095 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:49.095 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:49.095 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:49.095 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:49.095 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:35:49.095 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:49.095 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:49.095 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:49.095 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:35:49.095 22:58:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:51.627 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:51.627 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:35:51.627 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:51.627 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:51.627 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:51.627 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:51.627 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:51.627 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:35:51.627 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:51.627 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:35:51.627 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:35:51.627 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:35:51.627 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:35:51.627 
22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:35:51.627 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:35:51.627 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:51.627 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:51.628 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:51.628 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:51.628 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:51.628 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:51.628 22:58:54 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:51.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:51.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:35:51.628 00:35:51.628 --- 10.0.0.2 ping statistics --- 00:35:51.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:51.628 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:51.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:51.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:35:51.628 00:35:51.628 --- 10.0.0.1 ping statistics --- 00:35:51.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:51.628 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:51.628 22:58:54 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:51.628 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:51.629 ************************************ 00:35:51.629 START TEST nvmf_target_disconnect_tc1 00:35:51.629 ************************************ 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:51.629 [2024-10-11 22:58:54.807862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.629 [2024-10-11 22:58:54.807940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7af220 with 
addr=10.0.0.2, port=4420 00:35:51.629 [2024-10-11 22:58:54.807975] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:51.629 [2024-10-11 22:58:54.807998] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:51.629 [2024-10-11 22:58:54.808011] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:35:51.629 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:35:51.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:35:51.629 Initializing NVMe Controllers 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:51.629 00:35:51.629 real 0m0.100s 00:35:51.629 user 0m0.039s 00:35:51.629 sys 0m0.059s 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:51.629 ************************************ 00:35:51.629 END TEST nvmf_target_disconnect_tc1 00:35:51.629 ************************************ 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:51.629 22:58:54 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:51.629 ************************************ 00:35:51.629 START TEST nvmf_target_disconnect_tc2 00:35:51.629 ************************************ 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=400564 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 400564 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 400564 ']' 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:51.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:51.629 22:58:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:51.887 [2024-10-11 22:58:54.924883] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:35:51.887 [2024-10-11 22:58:54.924966] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:51.887 [2024-10-11 22:58:54.990242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:51.887 [2024-10-11 22:58:55.037010] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:51.887 [2024-10-11 22:58:55.037064] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:51.887 [2024-10-11 22:58:55.037094] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:51.887 [2024-10-11 22:58:55.037105] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:51.887 [2024-10-11 22:58:55.037115] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:51.887 [2024-10-11 22:58:55.038677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:35:51.887 [2024-10-11 22:58:55.038741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:35:51.887 [2024-10-11 22:58:55.038809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:35:51.887 [2024-10-11 22:58:55.038812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:52.146 Malloc0
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:52.146 [2024-10-11 22:58:55.252118] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:52.146 [2024-10-11 22:58:55.280400] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=400708
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:35:52.146 22:58:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:35:54.047 22:58:57
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 400564
00:35:54.047 22:58:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:35:54.047 Read completed with error (sct=0, sc=8)
00:35:54.047 starting I/O failed
00:35:54.047 Write completed with error (sct=0, sc=8)
00:35:54.047 starting I/O failed
00:35:54.048 [2024-10-11 22:58:57.305229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.048 Read completed with error (sct=0, sc=8)
00:35:54.048 starting I/O failed
00:35:54.048 Write completed with error (sct=0, sc=8)
00:35:54.048 starting I/O failed
00:35:54.048 [2024-10-11 22:58:57.305595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:54.048 Read completed with error (sct=0, sc=8)
00:35:54.048 starting I/O failed
00:35:54.048 Write completed with error (sct=0, sc=8)
00:35:54.048 starting I/O failed
00:35:54.048 [2024-10-11 22:58:57.305907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:54.048 Read completed with error (sct=0, sc=8)
00:35:54.048 starting I/O failed
00:35:54.048 Write completed with error (sct=0, sc=8)
00:35:54.048 starting I/O failed
00:35:54.048 [2024-10-11 22:58:57.306229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:54.048 [2024-10-11 22:58:57.306377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.048 [2024-10-11 22:58:57.306416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.048 qpair failed and we were unable to recover it.
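The burst of completion errors above is the expected signature of kill -9 on the target: every in-flight I/O on qpairs 1-4 fails with a CQ transport error. A small grep-based triage helper we sometimes run over a saved copy of such a log (count_failures is our own name, not part of the test suite):

```shell
# Count failed Read/Write completions and list the distinct qpairs that hit
# CQ transport errors, given a saved log file. Plain grep/sort, nothing SPDK-
# specific; the log file name is whatever you saved the run as.
count_failures() {
    local log=$1
    printf 'failed reads:  %s\n' "$(grep -c 'Read completed with error' "$log")"
    printf 'failed writes: %s\n' "$(grep -c 'Write completed with error' "$log")"
    # One line per distinct qpair that reported a CQ transport error
    grep -o 'CQ transport error .* on qpair id [0-9]*' "$log" | sort -u
}
```

Seeing all four qpairs of the 0xF-core reconnect run in that last list confirms the disconnect hit the whole controller, not a single queue.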
00:35:54.049 [2024-10-11 22:58:57.307642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.049 [2024-10-11 22:58:57.307680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.049 qpair failed and we were unable to recover it.
00:35:54.049 [2024-10-11 22:58:57.310240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.049 [2024-10-11 22:58:57.310286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.049 qpair failed and we were unable to recover it.
00:35:54.049 [2024-10-11 22:58:57.312074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.049 [2024-10-11 22:58:57.312113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.049 qpair failed and we were unable to recover it.
00:35:54.050 [2024-10-11 22:58:57.313657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-11 22:58:57.313683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-11 22:58:57.313766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-11 22:58:57.313792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-11 22:58:57.313904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-11 22:58:57.313930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-11 22:58:57.314029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-11 22:58:57.314055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-11 22:58:57.314163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-11 22:58:57.314189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 
00:35:54.050 [2024-10-11 22:58:57.314267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-11 22:58:57.314291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-11 22:58:57.314409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-11 22:58:57.314434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.333 [2024-10-11 22:58:57.314534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.333 [2024-10-11 22:58:57.314586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.333 qpair failed and we were unable to recover it. 00:35:54.333 [2024-10-11 22:58:57.314678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.333 [2024-10-11 22:58:57.314705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.333 qpair failed and we were unable to recover it. 00:35:54.333 [2024-10-11 22:58:57.314827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.333 [2024-10-11 22:58:57.314861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.333 qpair failed and we were unable to recover it. 
00:35:54.333 [2024-10-11 22:58:57.314953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.333 [2024-10-11 22:58:57.314979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.333 qpair failed and we were unable to recover it. 00:35:54.333 [2024-10-11 22:58:57.315081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.333 [2024-10-11 22:58:57.315109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.333 qpair failed and we were unable to recover it. 00:35:54.333 [2024-10-11 22:58:57.315199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.333 [2024-10-11 22:58:57.315225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.333 qpair failed and we were unable to recover it. 00:35:54.333 [2024-10-11 22:58:57.315368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.333 [2024-10-11 22:58:57.315394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.333 qpair failed and we were unable to recover it. 00:35:54.333 [2024-10-11 22:58:57.315503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.333 [2024-10-11 22:58:57.315528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.333 qpair failed and we were unable to recover it. 
00:35:54.333 [2024-10-11 22:58:57.315624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.333 [2024-10-11 22:58:57.315649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.333 qpair failed and we were unable to recover it. 00:35:54.333 [2024-10-11 22:58:57.315731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.333 [2024-10-11 22:58:57.315757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.333 qpair failed and we were unable to recover it. 00:35:54.333 [2024-10-11 22:58:57.315880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.333 [2024-10-11 22:58:57.315907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.333 qpair failed and we were unable to recover it. 00:35:54.333 [2024-10-11 22:58:57.315993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.333 [2024-10-11 22:58:57.316018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.333 qpair failed and we were unable to recover it. 00:35:54.333 [2024-10-11 22:58:57.316128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.333 [2024-10-11 22:58:57.316152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.333 qpair failed and we were unable to recover it. 
00:35:54.333 [2024-10-11 22:58:57.316257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.333 [2024-10-11 22:58:57.316282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.333 qpair failed and we were unable to recover it. 00:35:54.333 [2024-10-11 22:58:57.316398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.333 [2024-10-11 22:58:57.316424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.333 qpair failed and we were unable to recover it. 00:35:54.333 [2024-10-11 22:58:57.316539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.333 [2024-10-11 22:58:57.316581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.333 qpair failed and we were unable to recover it. 00:35:54.333 [2024-10-11 22:58:57.316708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.333 [2024-10-11 22:58:57.316733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.333 qpair failed and we were unable to recover it. 00:35:54.333 [2024-10-11 22:58:57.316815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.333 [2024-10-11 22:58:57.316846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.333 qpair failed and we were unable to recover it. 
00:35:54.333 [2024-10-11 22:58:57.316964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.333 [2024-10-11 22:58:57.316990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.333 qpair failed and we were unable to recover it. 00:35:54.333 [2024-10-11 22:58:57.317110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.333 [2024-10-11 22:58:57.317136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.333 qpair failed and we were unable to recover it. 00:35:54.333 [2024-10-11 22:58:57.317274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.333 [2024-10-11 22:58:57.317299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.333 qpair failed and we were unable to recover it. 00:35:54.333 [2024-10-11 22:58:57.317440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.333 [2024-10-11 22:58:57.317465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.333 qpair failed and we were unable to recover it. 00:35:54.333 [2024-10-11 22:58:57.317574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.317600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 
00:35:54.334 [2024-10-11 22:58:57.317718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.317744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.317822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.317847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.317940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.317966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.318077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.318103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.318175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.318200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 
00:35:54.334 [2024-10-11 22:58:57.318326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.318365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.318503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.318546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.318725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.318752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.318857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.318885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.318981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.319008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 
00:35:54.334 [2024-10-11 22:58:57.319103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.319130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.319243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.319270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.319385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.319410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.319524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.319558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.320675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.320701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 
00:35:54.334 [2024-10-11 22:58:57.320835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.320861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.320951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.320976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.321064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.321089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.321174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.321199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.321315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.321341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 
00:35:54.334 [2024-10-11 22:58:57.321452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.321477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.321586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.321625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.321753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.321792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.321927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.321954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.322066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.322092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 
00:35:54.334 [2024-10-11 22:58:57.322212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.322238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.322349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.322376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.322464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.322490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.322612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.322651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.322739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.322765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 
00:35:54.334 [2024-10-11 22:58:57.322967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.322993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.323078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.323103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.323217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.323243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.323383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.323408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.323492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.323525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 
00:35:54.334 [2024-10-11 22:58:57.323640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.323665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.323755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.323780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.323902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.323928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.324023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.324049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 00:35:54.334 [2024-10-11 22:58:57.324129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.334 [2024-10-11 22:58:57.324153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.334 qpair failed and we were unable to recover it. 
00:35:54.334 [2024-10-11 22:58:57.324243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-10-11 22:58:57.324271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-10-11 22:58:57.324357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-10-11 22:58:57.324383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-10-11 22:58:57.324474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-10-11 22:58:57.324502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-10-11 22:58:57.324631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-10-11 22:58:57.324657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-10-11 22:58:57.324747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-10-11 22:58:57.324786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 
00:35:54.335 [2024-10-11 22:58:57.324890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.324918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.325026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.325053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.325164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.325189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.325289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.325317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.325467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.325494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.325625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.325652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.325746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.325771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.325899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.325947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.326029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.326054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.326146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.326171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.326312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.326337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.326477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.326503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.326634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.326662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.326775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.326803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.326892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.326917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.327032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.327058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.327147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.327179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.327291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.327317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.327467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.327493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.327616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.327643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.327735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.327761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.327862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.327887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.327997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.328022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.328162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.328186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.328272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.328296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.328433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.328457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.328539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.328580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.328669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.328693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.328774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.328800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.328893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.328919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.329025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.329054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.329177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.329203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.329296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.329321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.329434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.329459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.329563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-10-11 22:58:57.329589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-10-11 22:58:57.329675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.329700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.329781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.329806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.329904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.329929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.330005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.330030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.330115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.330142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.330246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.330271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.330361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.330400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.330492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.330519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.330619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.330645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.330734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.330759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.330853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.330879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.330983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.331007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.331082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.331107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.331192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.331219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.331311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.331349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.331434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.331461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.331614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.331640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.331730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.331755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.331899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.331924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.332035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.332060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.332173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.332200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.332340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.332374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.332490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.332516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.332639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.332665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.332751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.332775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.332910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.332934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.333050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.333076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.333212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.333237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.333350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.333374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.333507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.333546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.333664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.333691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.333810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.333834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.333950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.333976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.334148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.334199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.334349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.334374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.334463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.334489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.334598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.334624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.334709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.334734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.334850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.334875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.334958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.334982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.335065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.336 [2024-10-11 22:58:57.335090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.336 qpair failed and we were unable to recover it.
00:35:54.336 [2024-10-11 22:58:57.335239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.335264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.335350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.335375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.335513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.335539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.335639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.335668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.335763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.335788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.335874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.335900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.336010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.336036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.336124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.336159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.336267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.336293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.336380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.336406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.336519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.336544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.336646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.336671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.336788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.336812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.336897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.336922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.337032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.337056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.337174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.337201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.337279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.337305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.337435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.337473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.337577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.337606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.337725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.337752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.337869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.337894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.338010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.338036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.338137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.338162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.338257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.338283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.338404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.338431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.338519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.338544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.338692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.338716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.338793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.338817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.338914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.338939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.339033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.339057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.339140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.339166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.339254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.339279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.339374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.339402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.339524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.337 [2024-10-11 22:58:57.339557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.337 qpair failed and we were unable to recover it.
00:35:54.337 [2024-10-11 22:58:57.339657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.338 [2024-10-11 22:58:57.339695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.338 qpair failed and we were unable to recover it.
00:35:54.338 [2024-10-11 22:58:57.339786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.338 [2024-10-11 22:58:57.339813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.338 qpair failed and we were unable to recover it.
00:35:54.338 [2024-10-11 22:58:57.339903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.338 [2024-10-11 22:58:57.339930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.338 qpair failed and we were unable to recover it.
00:35:54.338 [2024-10-11 22:58:57.340043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.338 [2024-10-11 22:58:57.340099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.338 qpair failed and we were unable to recover it.
00:35:54.338 [2024-10-11 22:58:57.340237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.338 [2024-10-11 22:58:57.340296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.338 qpair failed and we were unable to recover it.
00:35:54.338 [2024-10-11 22:58:57.340408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.340433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 00:35:54.338 [2024-10-11 22:58:57.340519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.340546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 00:35:54.338 [2024-10-11 22:58:57.340683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.340709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 00:35:54.338 [2024-10-11 22:58:57.340813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.340838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 00:35:54.338 [2024-10-11 22:58:57.340946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.340971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 
00:35:54.338 [2024-10-11 22:58:57.341054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.341079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 00:35:54.338 [2024-10-11 22:58:57.341180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.341205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 00:35:54.338 [2024-10-11 22:58:57.341344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.341371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 00:35:54.338 [2024-10-11 22:58:57.341459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.341484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 00:35:54.338 [2024-10-11 22:58:57.341588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.341616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 
00:35:54.338 [2024-10-11 22:58:57.341709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.341735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 00:35:54.338 [2024-10-11 22:58:57.341858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.341884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 00:35:54.338 [2024-10-11 22:58:57.341964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.341989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 00:35:54.338 [2024-10-11 22:58:57.342104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.342131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 00:35:54.338 [2024-10-11 22:58:57.342255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.342294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 
00:35:54.338 [2024-10-11 22:58:57.342416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.342442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 00:35:54.338 [2024-10-11 22:58:57.342530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.342566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 00:35:54.338 [2024-10-11 22:58:57.342705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.342731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 00:35:54.338 [2024-10-11 22:58:57.342856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.342881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 00:35:54.338 [2024-10-11 22:58:57.342970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.342995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 
00:35:54.338 [2024-10-11 22:58:57.343138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.343163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 00:35:54.338 [2024-10-11 22:58:57.343279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.343307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 00:35:54.338 [2024-10-11 22:58:57.343405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.343430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 00:35:54.338 [2024-10-11 22:58:57.343561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.343587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 00:35:54.338 [2024-10-11 22:58:57.343668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.343693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 
00:35:54.338 [2024-10-11 22:58:57.343811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.343836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 00:35:54.338 [2024-10-11 22:58:57.343940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.343965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 00:35:54.338 [2024-10-11 22:58:57.344073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.344100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 00:35:54.338 [2024-10-11 22:58:57.344240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.344266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 00:35:54.338 [2024-10-11 22:58:57.344352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.344378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 
00:35:54.338 [2024-10-11 22:58:57.344469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.344495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 00:35:54.338 [2024-10-11 22:58:57.344617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.344643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 00:35:54.338 [2024-10-11 22:58:57.344760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.344785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 00:35:54.338 [2024-10-11 22:58:57.344907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.344932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 00:35:54.338 [2024-10-11 22:58:57.345014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.345039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 
00:35:54.338 [2024-10-11 22:58:57.345115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.338 [2024-10-11 22:58:57.345145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.338 qpair failed and we were unable to recover it. 00:35:54.338 [2024-10-11 22:58:57.345257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.345283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.345424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.345450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.345571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.345597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.345680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.345706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 
00:35:54.339 [2024-10-11 22:58:57.345819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.345844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.345984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.346009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.346102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.346128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.346275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.346313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.346406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.346435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 
00:35:54.339 [2024-10-11 22:58:57.346565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.346605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.346707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.346733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.346857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.346883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.346991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.347017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.347159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.347210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 
00:35:54.339 [2024-10-11 22:58:57.347319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.347345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.347463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.347488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.347584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.347610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.347725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.347751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.347888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.347913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 
00:35:54.339 [2024-10-11 22:58:57.348006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.348030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.348136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.348161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.348243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.348268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.348392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.348420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.348537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.348569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 
00:35:54.339 [2024-10-11 22:58:57.348695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.348720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.348834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.348859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.348950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.348976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.349091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.349116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.349204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.349230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 
00:35:54.339 [2024-10-11 22:58:57.349330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.349369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.349470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.349508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.349613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.349642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.349725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.349751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.349833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.349859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 
00:35:54.339 [2024-10-11 22:58:57.349979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.350005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.350086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.350115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.350200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.350226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.350357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.350396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.350517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.350545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 
00:35:54.339 [2024-10-11 22:58:57.350669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.350701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.350810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.339 [2024-10-11 22:58:57.350836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.339 qpair failed and we were unable to recover it. 00:35:54.339 [2024-10-11 22:58:57.350950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.350975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.351063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.351093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.351185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.351210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 
00:35:54.340 [2024-10-11 22:58:57.351300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.351325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.351430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.351455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.351536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.351572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.351675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.351701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.351782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.351807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 
00:35:54.340 [2024-10-11 22:58:57.351888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.351914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.352042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.352067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.352154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.352179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.352287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.352314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.352400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.352426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 
00:35:54.340 [2024-10-11 22:58:57.352646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.352672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.352781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.352806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.352919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.352944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.353030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.353057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.353145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.353171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 
00:35:54.340 [2024-10-11 22:58:57.353322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.353361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.353476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.353505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.353606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.353634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.353723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.353749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.353860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.353885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 
00:35:54.340 [2024-10-11 22:58:57.354012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.354037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.354148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.354203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.354291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.354316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.354449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.354475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.354591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.354619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 
00:35:54.340 [2024-10-11 22:58:57.354733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.354758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.354834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.354859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.354940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.354965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.355115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.355140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.355341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.355366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 
00:35:54.340 [2024-10-11 22:58:57.355449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.355476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.355594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.355620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.355715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.355740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.355824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.355850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.355967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.355993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 
00:35:54.340 [2024-10-11 22:58:57.356074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.356099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.356211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-10-11 22:58:57.356236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-10-11 22:58:57.356364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.356402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.356496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.356534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.356622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.356649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 
00:35:54.341 [2024-10-11 22:58:57.356745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.356770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.356854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.356879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.356993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.357018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.357172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.357226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.357332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.357357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 
00:35:54.341 [2024-10-11 22:58:57.357544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.357577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.357695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.357721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.357831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.357856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.357935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.357960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.358051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.358079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 
00:35:54.341 [2024-10-11 22:58:57.358195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.358224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.358317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.358343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.358461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.358487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.358605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.358632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.358750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.358776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 
00:35:54.341 [2024-10-11 22:58:57.358913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.358938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.359076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.359125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.359276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.359331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.359449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.359475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.359600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.359638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 
00:35:54.341 [2024-10-11 22:58:57.359768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.359806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.359927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.359954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.360074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.360129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.360335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.360387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.360529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.360566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 
00:35:54.341 [2024-10-11 22:58:57.360686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.360712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.360798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.360823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.360921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.360945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.361058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.361085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.361208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.361233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 
00:35:54.341 [2024-10-11 22:58:57.361306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.361331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.361413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.361438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.361570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.361609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.361730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.361757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-11 22:58:57.361872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.361897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 
00:35:54.341 [2024-10-11 22:58:57.361993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-11 22:58:57.362060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-11 22:58:57.362146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-11 22:58:57.362171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-11 22:58:57.362290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-11 22:58:57.362318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-11 22:58:57.362439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-11 22:58:57.362466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-11 22:58:57.362548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-11 22:58:57.362582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 
00:35:54.342 [2024-10-11 22:58:57.362729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-11 22:58:57.362754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-11 22:58:57.362846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-11 22:58:57.362871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-11 22:58:57.362960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-11 22:58:57.362986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-11 22:58:57.363100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-11 22:58:57.363127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-11 22:58:57.363241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-11 22:58:57.363266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 
00:35:54.342 [2024-10-11 22:58:57.363407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-11 22:58:57.363446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-11 22:58:57.363546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-11 22:58:57.363584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-11 22:58:57.363728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-11 22:58:57.363754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-11 22:58:57.363832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-11 22:58:57.363858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-11 22:58:57.363948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-11 22:58:57.363979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 
00:35:54.342 [2024-10-11 22:58:57.364086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.342 [2024-10-11 22:58:57.364111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.342 qpair failed and we were unable to recover it.
00:35:54.342 [2024-10-11 22:58:57.364255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.342 [2024-10-11 22:58:57.364280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.342 qpair failed and we were unable to recover it.
00:35:54.342 [2024-10-11 22:58:57.364475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.342 [2024-10-11 22:58:57.364500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.342 qpair failed and we were unable to recover it.
00:35:54.342 [2024-10-11 22:58:57.364668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.342 [2024-10-11 22:58:57.364707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.342 qpair failed and we were unable to recover it.
00:35:54.342 [2024-10-11 22:58:57.364802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.342 [2024-10-11 22:58:57.364830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.342 qpair failed and we were unable to recover it.
00:35:54.342 [2024-10-11 22:58:57.365014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.342 [2024-10-11 22:58:57.365068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.342 qpair failed and we were unable to recover it.
00:35:54.342 [2024-10-11 22:58:57.365202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.342 [2024-10-11 22:58:57.365261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.342 qpair failed and we were unable to recover it.
00:35:54.342 [2024-10-11 22:58:57.365375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.342 [2024-10-11 22:58:57.365400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.342 qpair failed and we were unable to recover it.
00:35:54.342 [2024-10-11 22:58:57.365518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.342 [2024-10-11 22:58:57.365546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.342 qpair failed and we were unable to recover it.
00:35:54.342 [2024-10-11 22:58:57.365639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.342 [2024-10-11 22:58:57.365665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.342 qpair failed and we were unable to recover it.
00:35:54.342 [2024-10-11 22:58:57.365777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.342 [2024-10-11 22:58:57.365803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.342 qpair failed and we were unable to recover it.
00:35:54.342 [2024-10-11 22:58:57.365891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.342 [2024-10-11 22:58:57.365916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.342 qpair failed and we were unable to recover it.
00:35:54.342 [2024-10-11 22:58:57.366070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.342 [2024-10-11 22:58:57.366115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.342 qpair failed and we were unable to recover it.
00:35:54.342 [2024-10-11 22:58:57.366300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.342 [2024-10-11 22:58:57.366353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.342 qpair failed and we were unable to recover it.
00:35:54.342 [2024-10-11 22:58:57.366465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.342 [2024-10-11 22:58:57.366491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.342 qpair failed and we were unable to recover it.
00:35:54.342 [2024-10-11 22:58:57.366683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.342 [2024-10-11 22:58:57.366709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.342 qpair failed and we were unable to recover it.
00:35:54.342 [2024-10-11 22:58:57.366814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.342 [2024-10-11 22:58:57.366839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.342 qpair failed and we were unable to recover it.
00:35:54.342 [2024-10-11 22:58:57.366977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.342 [2024-10-11 22:58:57.367002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.342 qpair failed and we were unable to recover it.
00:35:54.342 [2024-10-11 22:58:57.367169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.342 [2024-10-11 22:58:57.367223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.342 qpair failed and we were unable to recover it.
00:35:54.342 [2024-10-11 22:58:57.367336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.342 [2024-10-11 22:58:57.367363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.342 qpair failed and we were unable to recover it.
00:35:54.342 [2024-10-11 22:58:57.367448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.342 [2024-10-11 22:58:57.367475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.342 qpair failed and we were unable to recover it.
00:35:54.342 [2024-10-11 22:58:57.367592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.342 [2024-10-11 22:58:57.367619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.342 qpair failed and we were unable to recover it.
00:35:54.342 [2024-10-11 22:58:57.367712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.342 [2024-10-11 22:58:57.367739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.342 qpair failed and we were unable to recover it.
00:35:54.342 [2024-10-11 22:58:57.367828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.342 [2024-10-11 22:58:57.367854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.342 qpair failed and we were unable to recover it.
00:35:54.342 [2024-10-11 22:58:57.367970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.367995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.368104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.368129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.368243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.368275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.368363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.368389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.368469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.368495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.368636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.368663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.368752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.368777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.368889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.368914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.369016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.369042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.369127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.369154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.369241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.369267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.369355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.369380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.369499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.369525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.369620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.369646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.369755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.369780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.369876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.369903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.369990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.370015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.370109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.370134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.370220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.370247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.370329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.370355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.370455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.370494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.370615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.370642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.370771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.370798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.370910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.370936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.371047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.371072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.371197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.371236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.371368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.371407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.371494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.371521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.371618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.371644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.371757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.371784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.371876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.371903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.372024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.372050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.372171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.372196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.372274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.372300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.372383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.372411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.372494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.372521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.372650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.372680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.372797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.372825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.372915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.372941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.373096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.373145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.373223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.343 [2024-10-11 22:58:57.373249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.343 qpair failed and we were unable to recover it.
00:35:54.343 [2024-10-11 22:58:57.373364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.373391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.373471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.373503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.373619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.373658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.373744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.373772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.373884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.373910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.374110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.374162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.374282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.374335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.374425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.374452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.374540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.374573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.374696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.374723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.374838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.374864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.375004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.375030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.375169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.375222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.375310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.375336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.375458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.375484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.375576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.375603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.375698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.375723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.375834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.375859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.375947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.375971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.376111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.376136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.376225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.376250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.376360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.376385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.376498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.376523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.376617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.376646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.376764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.376791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.376933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.376959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.377034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.377059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.377171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.377197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.377310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.377341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.377422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.377448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.377563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.377589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.377727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.377753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.377849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.377875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.377988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.378013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.378150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.378199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.378315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.378339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.378427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.378453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.378540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.378570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.378661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.378686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.378770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.378795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.378937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.344 [2024-10-11 22:58:57.378962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.344 qpair failed and we were unable to recover it.
00:35:54.344 [2024-10-11 22:58:57.379075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.345 [2024-10-11 22:58:57.379100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.345 qpair failed and we were unable to recover it.
00:35:54.345 [2024-10-11 22:58:57.379220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.345 [2024-10-11 22:58:57.379246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.345 qpair failed and we were unable to recover it.
00:35:54.345 [2024-10-11 22:58:57.379349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.345 [2024-10-11 22:58:57.379374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.345 qpair failed and we were unable to recover it.
00:35:54.345 [2024-10-11 22:58:57.379468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.345 [2024-10-11 22:58:57.379494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.345 qpair failed and we were unable to recover it.
00:35:54.345 [2024-10-11 22:58:57.379635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.345 [2024-10-11 22:58:57.379660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.345 qpair failed and we were unable to recover it.
00:35:54.345 [2024-10-11 22:58:57.379767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.345 [2024-10-11 22:58:57.379792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.345 qpair failed and we were unable to recover it.
00:35:54.345 [2024-10-11 22:58:57.379872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.345 [2024-10-11 22:58:57.379897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.345 qpair failed and we were unable to recover it.
00:35:54.345 [2024-10-11 22:58:57.380011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.345 [2024-10-11 22:58:57.380037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.345 qpair failed and we were unable to recover it.
00:35:54.345 [2024-10-11 22:58:57.380130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.345 [2024-10-11 22:58:57.380155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.345 qpair failed and we were unable to recover it.
00:35:54.345 [2024-10-11 22:58:57.380262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.380287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-11 22:58:57.380396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.380421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-11 22:58:57.380504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.380531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-11 22:58:57.380662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.380687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-11 22:58:57.380779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.380804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 
00:35:54.345 [2024-10-11 22:58:57.380883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.380913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-11 22:58:57.381022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.381047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-11 22:58:57.381155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.381179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-11 22:58:57.381259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.381284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-11 22:58:57.381421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.381445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 
00:35:54.345 [2024-10-11 22:58:57.381563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.381589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-11 22:58:57.381730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.381755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-11 22:58:57.381845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.381871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-11 22:58:57.381979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.382003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-11 22:58:57.382144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.382169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 
00:35:54.345 [2024-10-11 22:58:57.382284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.382309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-11 22:58:57.382440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.382479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-11 22:58:57.382597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.382626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-11 22:58:57.382712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.382738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-11 22:58:57.382864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.382891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 
00:35:54.345 [2024-10-11 22:58:57.383007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.383033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-11 22:58:57.383157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.383182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-11 22:58:57.383268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.383295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-11 22:58:57.383432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.383457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-11 22:58:57.383571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.383596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 
00:35:54.345 [2024-10-11 22:58:57.383679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.383704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-11 22:58:57.383818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.383844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-11 22:58:57.383928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.383953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-11 22:58:57.384069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.384095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-11 22:58:57.384205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.384230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 
00:35:54.345 [2024-10-11 22:58:57.384350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.384388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-11 22:58:57.384488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.384516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-11 22:58:57.384610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-11 22:58:57.384641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.384730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.384756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.384870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.384897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 
00:35:54.346 [2024-10-11 22:58:57.385005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.385031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.385125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.385150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.385264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.385290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.385372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.385397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.385480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.385504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 
00:35:54.346 [2024-10-11 22:58:57.385584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.385609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.385723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.385748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.385836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.385860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.385945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.385970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.386079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.386103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 
00:35:54.346 [2024-10-11 22:58:57.386216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.386240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.386354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.386393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.386517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.386544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.386645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.386672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.386765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.386791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 
00:35:54.346 [2024-10-11 22:58:57.386903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.386929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.387037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.387063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.387201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.387229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.387317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.387342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.387421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.387446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 
00:35:54.346 [2024-10-11 22:58:57.387526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.387555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.387647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.387672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.387751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.387775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.387888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.387913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.387987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.388019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 
00:35:54.346 [2024-10-11 22:58:57.388102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.388126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.388240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.388267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.388347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.388373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.388470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.388508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.388644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.388673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 
00:35:54.346 [2024-10-11 22:58:57.388788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.388814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.388911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.388936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.389021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.389048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.389132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-11 22:58:57.389157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-11 22:58:57.389242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.389269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 
00:35:54.347 [2024-10-11 22:58:57.389411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.389436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.389547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.389579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.389665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.389690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.389810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.389835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.389950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.389975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 
00:35:54.347 [2024-10-11 22:58:57.390100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.390125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.390236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.390261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.390365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.390391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.390504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.390528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.390628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.390655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 
00:35:54.347 [2024-10-11 22:58:57.390738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.390763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.390881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.390906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.391043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.391068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.391174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.391199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.391299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.391324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 
00:35:54.347 [2024-10-11 22:58:57.391410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.391437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.391562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.391592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.391678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.391711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.391817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.391842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.391959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.391984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 
00:35:54.347 [2024-10-11 22:58:57.392076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.392100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.392213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.392248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.392358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.392382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.392461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.392487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.392585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.392610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 
00:35:54.347 [2024-10-11 22:58:57.392721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.392747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.392831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.392865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.393002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.393027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.393143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.393167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.393286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.393310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 
00:35:54.347 [2024-10-11 22:58:57.393435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.393460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.393582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.393608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.393745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.393770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.393889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.393914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.394019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.394044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 
00:35:54.347 [2024-10-11 22:58:57.394144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.394169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.394286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.394310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.394399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.394424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.394536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.394571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-11 22:58:57.394659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-11 22:58:57.394684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 
00:35:54.347 [2024-10-11 22:58:57.394764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.394789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.394903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.394928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.395041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.395067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.395152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.395181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.395302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.395340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 
00:35:54.348 [2024-10-11 22:58:57.395454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.395481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.395616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.395644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.395724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.395750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.395837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.395869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.395984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.396009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 
00:35:54.348 [2024-10-11 22:58:57.396122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.396148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.396227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.396253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.396359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.396385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.396469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.396496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.396593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.396619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 
00:35:54.348 [2024-10-11 22:58:57.396709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.396735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.396879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.396903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.396994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.397021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.397134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.397160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.397275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.397303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 
00:35:54.348 [2024-10-11 22:58:57.397419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.397445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.397535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.397574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.397688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.397716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.397799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.397825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.397943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.397969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 
00:35:54.348 [2024-10-11 22:58:57.398084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.398110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.398200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.398239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.398362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.398390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.398472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.398497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.398598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.398625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 
00:35:54.348 [2024-10-11 22:58:57.398756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.398795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.398892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.398920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.399030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.399057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.399202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.399228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.399344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.399372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 
00:35:54.348 [2024-10-11 22:58:57.399457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.399481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.399604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.399631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.399714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.399739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.399896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.399946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.400085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.400136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 
00:35:54.348 [2024-10-11 22:58:57.400250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-11 22:58:57.400302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-11 22:58:57.400415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.400442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.400537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.400574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.400662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.400689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.400775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.400801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 
00:35:54.349 [2024-10-11 22:58:57.400943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.400968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.401109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.401136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.401251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.401276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.401420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.401449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.401601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.401628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 
00:35:54.349 [2024-10-11 22:58:57.401740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.401765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.401908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.401934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.402050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.402077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.402192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.402217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.402357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.402383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 
00:35:54.349 [2024-10-11 22:58:57.402498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.402526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.402667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.402694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.402797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.402824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.402908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.402933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.403007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.403031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 
00:35:54.349 [2024-10-11 22:58:57.403124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.403148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.403263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.403316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.403400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.403425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.403575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.403603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.403747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.403802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 
00:35:54.349 [2024-10-11 22:58:57.403957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.404007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.404187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.404241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.404351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.404377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.404459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.404484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.404637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.404664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 
00:35:54.349 [2024-10-11 22:58:57.404779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.404809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.404903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.404930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.405046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.405073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.405186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.405212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.405325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.405351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 
00:35:54.349 [2024-10-11 22:58:57.405463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.405489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.405645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.405672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.405763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.405789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.405911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.405936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.406048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.406073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 
00:35:54.349 [2024-10-11 22:58:57.406149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.406173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.406285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-11 22:58:57.406312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-11 22:58:57.406413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.406440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-11 22:58:57.406591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.406618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-11 22:58:57.406736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.406762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 
00:35:54.350 [2024-10-11 22:58:57.406860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.406887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-11 22:58:57.407027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.407053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-11 22:58:57.407195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.407221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-11 22:58:57.407309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.407337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-11 22:58:57.407467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.407506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 
00:35:54.350 [2024-10-11 22:58:57.407610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.407637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-11 22:58:57.407728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.407753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-11 22:58:57.407839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.407873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-11 22:58:57.407989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.408015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-11 22:58:57.408212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.408239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 
00:35:54.350 [2024-10-11 22:58:57.408329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.408357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-11 22:58:57.408499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.408525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-11 22:58:57.408610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.408640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-11 22:58:57.408758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.408784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-11 22:58:57.408880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.408907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 
00:35:54.350 [2024-10-11 22:58:57.408985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.409011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-11 22:58:57.409158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.409209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-11 22:58:57.409358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.409384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-11 22:58:57.409495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.409519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-11 22:58:57.409608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.409634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 
00:35:54.350 [2024-10-11 22:58:57.409717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.409744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-11 22:58:57.409890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.409947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-11 22:58:57.410029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.410054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-11 22:58:57.410216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.410268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-11 22:58:57.410358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.410384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 
00:35:54.350 [2024-10-11 22:58:57.410471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.410497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-11 22:58:57.410594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.410621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-11 22:58:57.410711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.410738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-11 22:58:57.410859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.410885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-11 22:58:57.410981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.411007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 
00:35:54.350 [2024-10-11 22:58:57.411145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.411171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-11 22:58:57.411258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-11 22:58:57.411286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.411390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.411415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.411527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.411558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.411648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.411674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 
00:35:54.351 [2024-10-11 22:58:57.411751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.411777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.411882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.411907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.412017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.412043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.412134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.412159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.412243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.412269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 
00:35:54.351 [2024-10-11 22:58:57.412358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.412383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.412460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.412484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.412577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.412602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.412681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.412708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.412792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.412818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 
00:35:54.351 [2024-10-11 22:58:57.412933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.412960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.413077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.413103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.413242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.413268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.413383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.413409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.413524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.413556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 
00:35:54.351 [2024-10-11 22:58:57.413671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.413698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.413816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.413842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.413980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.414005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.414124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.414149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.414267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.414294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 
00:35:54.351 [2024-10-11 22:58:57.414436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.414461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.414607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.414633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.414746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.414774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.414862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.414887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.415003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.415029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 
00:35:54.351 [2024-10-11 22:58:57.415139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.415165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.415243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.415268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.415411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.415437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.415564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.415592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.415709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.415734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 
00:35:54.351 [2024-10-11 22:58:57.415809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.415834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.415957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.416009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.416159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.416212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.416326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.416352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.416467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.416492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 
00:35:54.351 [2024-10-11 22:58:57.416616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.416655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.416813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-11 22:58:57.416863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-11 22:58:57.416950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-11 22:58:57.416978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-11 22:58:57.417092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-11 22:58:57.417118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-11 22:58:57.417203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-11 22:58:57.417229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 
00:35:54.352 [2024-10-11 22:58:57.417344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-11 22:58:57.417371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-11 22:58:57.417455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-11 22:58:57.417482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-11 22:58:57.417577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-11 22:58:57.417604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-11 22:58:57.417747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-11 22:58:57.417773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-11 22:58:57.417896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-11 22:58:57.417928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 
00:35:54.352 [2024-10-11 22:58:57.418092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.418136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.418320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.418363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.418520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.418546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.418646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.418674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.418811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.418837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.418957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.419005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.419223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.419273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.419383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.419409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.419538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.419584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.419687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.419716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.419831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.419862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.420007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.420033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.420175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.420202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.420293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.420320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.420422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.420449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.420560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.420587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.420702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.420728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.420834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.420860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.420969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.420995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.421113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.421139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.421340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.421368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.421498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.421537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.421668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.421696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.421836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.421864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.421973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.421999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.422143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.422194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.422310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.422338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.422483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.422510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.422632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.422658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.422745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.422771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.422855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.422881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.422992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.423018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.352 [2024-10-11 22:58:57.423110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.352 [2024-10-11 22:58:57.423135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.352 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.423213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.423239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.423351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.423378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.423491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.423518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.423627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.423664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.423759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.423785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.423876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.423903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.424000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.424031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.424134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.424172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.424267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.424294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.424388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.424427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.424519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.424546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.424656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.424682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.424768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.424794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.424888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.424913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.425007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.425033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.425144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.425169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.425275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.425301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.425387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.425412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.425507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.425535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.425636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.425663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.425785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.425810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.425882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.425908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.425999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.426025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.426116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.426154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.426263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.426290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.426406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.426433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.426514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.426541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.426631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.426657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.426776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.426814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.426960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.426987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.427107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.427133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.427227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.427253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.427339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.427364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.427463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.427502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.427640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.427668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.427759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.427786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.427912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.427937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.428053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.428079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.428218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.428244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.428380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.428406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.353 [2024-10-11 22:58:57.428545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.353 [2024-10-11 22:58:57.428577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.353 qpair failed and we were unable to recover it.
00:35:54.354 [2024-10-11 22:58:57.428667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.354 [2024-10-11 22:58:57.428693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.354 qpair failed and we were unable to recover it.
00:35:54.354 [2024-10-11 22:58:57.428807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.354 [2024-10-11 22:58:57.428835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.354 qpair failed and we were unable to recover it.
00:35:54.354 [2024-10-11 22:58:57.428941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.354 [2024-10-11 22:58:57.428966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.354 qpair failed and we were unable to recover it.
00:35:54.354 [2024-10-11 22:58:57.429060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.354 [2024-10-11 22:58:57.429087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.354 qpair failed and we were unable to recover it.
00:35:54.354 [2024-10-11 22:58:57.429167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.354 [2024-10-11 22:58:57.429192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.354 qpair failed and we were unable to recover it.
00:35:54.354 [2024-10-11 22:58:57.429272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.354 [2024-10-11 22:58:57.429303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.354 qpair failed and we were unable to recover it.
00:35:54.354 [2024-10-11 22:58:57.429441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.354 [2024-10-11 22:58:57.429466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.354 qpair failed and we were unable to recover it.
00:35:54.354 [2024-10-11 22:58:57.429563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.354 [2024-10-11 22:58:57.429590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.354 qpair failed and we were unable to recover it.
00:35:54.354 [2024-10-11 22:58:57.429683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.354 [2024-10-11 22:58:57.429708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.354 qpair failed and we were unable to recover it.
00:35:54.354 [2024-10-11 22:58:57.429821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.354 [2024-10-11 22:58:57.429847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.354 qpair failed and we were unable to recover it.
00:35:54.354 [2024-10-11 22:58:57.429931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.354 [2024-10-11 22:58:57.429958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.354 qpair failed and we were unable to recover it.
00:35:54.354 [2024-10-11 22:58:57.430070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.354 [2024-10-11 22:58:57.430095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.354 qpair failed and we were unable to recover it.
00:35:54.354 [2024-10-11 22:58:57.430210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.354 [2024-10-11 22:58:57.430236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.354 qpair failed and we were unable to recover it.
00:35:54.354 [2024-10-11 22:58:57.430315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.354 [2024-10-11 22:58:57.430342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.354 qpair failed and we were unable to recover it.
00:35:54.354 [2024-10-11 22:58:57.430449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.354 [2024-10-11 22:58:57.430488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.354 qpair failed and we were unable to recover it.
00:35:54.354 [2024-10-11 22:58:57.430594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.354 [2024-10-11 22:58:57.430622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.354 qpair failed and we were unable to recover it.
00:35:54.354 [2024-10-11 22:58:57.430738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.354 [2024-10-11 22:58:57.430764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.354 qpair failed and we were unable to recover it.
00:35:54.354 [2024-10-11 22:58:57.430847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.354 [2024-10-11 22:58:57.430872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.354 qpair failed and we were unable to recover it.
00:35:54.354 [2024-10-11 22:58:57.430980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.354 [2024-10-11 22:58:57.431005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.354 qpair failed and we were unable to recover it.
00:35:54.354 [2024-10-11 22:58:57.431094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.354 [2024-10-11 22:58:57.431118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.354 qpair failed and we were unable to recover it.
00:35:54.354 [2024-10-11 22:58:57.431318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.354 [2024-10-11 22:58:57.431344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.354 qpair failed and we were unable to recover it.
00:35:54.354 [2024-10-11 22:58:57.431458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-11 22:58:57.431483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-11 22:58:57.431574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-11 22:58:57.431600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-11 22:58:57.431688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-11 22:58:57.431712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-11 22:58:57.431802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-11 22:58:57.431827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-11 22:58:57.431939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-11 22:58:57.431967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 
00:35:54.354 [2024-10-11 22:58:57.432049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-11 22:58:57.432074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-11 22:58:57.432167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-11 22:58:57.432192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-11 22:58:57.432305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-11 22:58:57.432332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-11 22:58:57.432496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-11 22:58:57.432535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-11 22:58:57.432677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-11 22:58:57.432716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 
00:35:54.354 [2024-10-11 22:58:57.432847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-11 22:58:57.432874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-11 22:58:57.432963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-11 22:58:57.432993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-11 22:58:57.433108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-11 22:58:57.433134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-11 22:58:57.433222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-11 22:58:57.433248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-11 22:58:57.433331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-11 22:58:57.433358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 
00:35:54.355 [2024-10-11 22:58:57.433471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.433497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.433654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.433683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.433832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.433859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.433953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.433978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.434065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.434091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 
00:35:54.355 [2024-10-11 22:58:57.434204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.434229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.434320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.434346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.434461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.434487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.434579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.434606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.434696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.434722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 
00:35:54.355 [2024-10-11 22:58:57.434816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.434842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.434951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.434976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.435118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.435144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.435263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.435290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.435442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.435480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 
00:35:54.355 [2024-10-11 22:58:57.435600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.435639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.435736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.435764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.435843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.435870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.435951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.435977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.436083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.436108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 
00:35:54.355 [2024-10-11 22:58:57.436219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.436246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.436355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.436380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.436467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.436494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.436584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.436616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.436722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.436747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 
00:35:54.355 [2024-10-11 22:58:57.436837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.436863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.436984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.437011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.437097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.437122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.437204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.437232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.437322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.437347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 
00:35:54.355 [2024-10-11 22:58:57.437489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.437516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.437604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.437629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.437711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.437736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.437848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.437873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.437953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.437980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 
00:35:54.355 [2024-10-11 22:58:57.438092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.438119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.438227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.438252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.438335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.438362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.438479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.438506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-11 22:58:57.438646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.438685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 
00:35:54.355 [2024-10-11 22:58:57.438808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-11 22:58:57.438836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-11 22:58:57.438927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-11 22:58:57.438954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-11 22:58:57.439042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-11 22:58:57.439068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-11 22:58:57.439180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-11 22:58:57.439207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-11 22:58:57.439317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-11 22:58:57.439342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 
00:35:54.356 [2024-10-11 22:58:57.439470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-11 22:58:57.439509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-11 22:58:57.439629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-11 22:58:57.439657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-11 22:58:57.439740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-11 22:58:57.439766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-11 22:58:57.439867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-11 22:58:57.439893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-11 22:58:57.440003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-11 22:58:57.440029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 
00:35:54.356 [2024-10-11 22:58:57.440121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-11 22:58:57.440160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-11 22:58:57.440309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-11 22:58:57.440336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-11 22:58:57.440418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-11 22:58:57.440446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-11 22:58:57.440537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-11 22:58:57.440576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-11 22:58:57.440716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-11 22:58:57.440741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 
00:35:54.356 [2024-10-11 22:58:57.440862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-11 22:58:57.440888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-11 22:58:57.440963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-11 22:58:57.440988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-11 22:58:57.441074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-11 22:58:57.441099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-11 22:58:57.441181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-11 22:58:57.441208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-11 22:58:57.441343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-11 22:58:57.441382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 
00:35:54.356 [2024-10-11 22:58:57.441513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-11 22:58:57.441560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-11 22:58:57.441663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-11 22:58:57.441691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-11 22:58:57.441812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-11 22:58:57.441837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-11 22:58:57.442023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-11 22:58:57.442106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-11 22:58:57.442255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-11 22:58:57.442304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 
00:35:54.356 [2024-10-11 22:58:57.442491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-11 22:58:57.442516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-11 22:58:57.442658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-11 22:58:57.442685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-11 22:58:57.442797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-11 22:58:57.442822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-11 22:58:57.442962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-11 22:58:57.443009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-11 22:58:57.443202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-11 22:58:57.443243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 
00:35:54.356 [2024-10-11 22:58:57.443456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.356 [2024-10-11 22:58:57.443499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.356 qpair failed and we were unable to recover it.
00:35:54.356 [2024-10-11 22:58:57.443689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.356 [2024-10-11 22:58:57.443716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.356 qpair failed and we were unable to recover it.
00:35:54.356 [2024-10-11 22:58:57.443802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.356 [2024-10-11 22:58:57.443827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.356 qpair failed and we were unable to recover it.
00:35:54.356 [2024-10-11 22:58:57.443936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.356 [2024-10-11 22:58:57.443962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.356 qpair failed and we were unable to recover it.
00:35:54.356 [2024-10-11 22:58:57.444044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.356 [2024-10-11 22:58:57.444069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.356 qpair failed and we were unable to recover it.
00:35:54.356 [2024-10-11 22:58:57.444186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.356 [2024-10-11 22:58:57.444232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.356 qpair failed and we were unable to recover it.
00:35:54.356 [2024-10-11 22:58:57.444457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.356 [2024-10-11 22:58:57.444499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.356 qpair failed and we were unable to recover it.
00:35:54.356 [2024-10-11 22:58:57.444676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.356 [2024-10-11 22:58:57.444702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.356 qpair failed and we were unable to recover it.
00:35:54.356 [2024-10-11 22:58:57.444840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.356 [2024-10-11 22:58:57.444865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.356 qpair failed and we were unable to recover it.
00:35:54.356 [2024-10-11 22:58:57.444997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.356 [2024-10-11 22:58:57.445038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.356 qpair failed and we were unable to recover it.
00:35:54.356 [2024-10-11 22:58:57.445237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.356 [2024-10-11 22:58:57.445277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.356 qpair failed and we were unable to recover it.
00:35:54.356 [2024-10-11 22:58:57.445469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.356 [2024-10-11 22:58:57.445533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.445660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.445687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.445778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.445804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.445944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.445970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.446137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.446187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.446322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.446372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.446485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.446510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.446656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.446682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.446820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.446845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.446983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.447034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.447117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.447143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.447282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.447307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.447399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.447427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.447543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.447574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.447669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.447696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.447784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.447809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.447924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.447949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.448100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.448139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.448291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223a260 is same with the state(6) to be set
00:35:54.357 [2024-10-11 22:58:57.448446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.448485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.448621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.448649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.448762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.448788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.448883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.448909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.449090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.449146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.449298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.449347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.449534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.449567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.449683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.449709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.449795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.449821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.449934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.449959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.450095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.450144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.450230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.450256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.450354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.450379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.450476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.450514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.450612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.450640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.450729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.450755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.450877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.450904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.450984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.451009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.451106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.451132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.451212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.451239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.451372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.451411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.451501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.451528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.451653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.357 [2024-10-11 22:58:57.451680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-10-11 22:58:57.451767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.451792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.451883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.451908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.452026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.452051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.452168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.452192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.452271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.452296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.452413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.452442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.452564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.452592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.452705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.452731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.452828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.452856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.452971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.452997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.453136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.453162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.453304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.453329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.453432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.453471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.453630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.453658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.453749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.453775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.453911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.453937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.454025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.454051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.454135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.454162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.454257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.454284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.454393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.454418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.454559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.454594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.454688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.454718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.454804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.454829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.454943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.454969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.455143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.455204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.455332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.455373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.455495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.455521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.455626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.455652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.455739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.455764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.455872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.455897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.455973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.455998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.456141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.456166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.456278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.456304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.456407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.456432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.456572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.456612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.456722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.456760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.456868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.456895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.457014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.457064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.457179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.457205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.457297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.358 [2024-10-11 22:58:57.457323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.358 qpair failed and we were unable to recover it.
00:35:54.358 [2024-10-11 22:58:57.457467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.359 [2024-10-11 22:58:57.457494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.359 qpair failed and we were unable to recover it.
00:35:54.359 [2024-10-11 22:58:57.457632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.359 [2024-10-11 22:58:57.457661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.359 qpair failed and we were unable to recover it.
00:35:54.359 [2024-10-11 22:58:57.457750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.359 [2024-10-11 22:58:57.457778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.359 qpair failed and we were unable to recover it.
00:35:54.359 [2024-10-11 22:58:57.457869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.359 [2024-10-11 22:58:57.457895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.359 qpair failed and we were unable to recover it.
00:35:54.359 [2024-10-11 22:58:57.458009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.359 [2024-10-11 22:58:57.458035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.359 qpair failed and we were unable to recover it.
00:35:54.359 [2024-10-11 22:58:57.458177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.359 [2024-10-11 22:58:57.458202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.359 qpair failed and we were unable to recover it.
00:35:54.359 [2024-10-11 22:58:57.458340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.359 [2024-10-11 22:58:57.458367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.359 qpair failed and we were unable to recover it.
00:35:54.359 [2024-10-11 22:58:57.458494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.359 [2024-10-11 22:58:57.458520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.359 qpair failed and we were unable to recover it.
00:35:54.359 [2024-10-11 22:58:57.458676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.359 [2024-10-11 22:58:57.458710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.359 qpair failed and we were unable to recover it.
00:35:54.359 [2024-10-11 22:58:57.458831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.359 [2024-10-11 22:58:57.458856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.359 qpair failed and we were unable to recover it.
00:35:54.359 [2024-10-11 22:58:57.458988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.359 [2024-10-11 22:58:57.459029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.359 qpair failed and we were unable to recover it.
00:35:54.359 [2024-10-11 22:58:57.459279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.359 [2024-10-11 22:58:57.459342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.359 qpair failed and we were unable to recover it.
00:35:54.359 [2024-10-11 22:58:57.459514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.359 [2024-10-11 22:58:57.459539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.359 qpair failed and we were unable to recover it.
00:35:54.359 [2024-10-11 22:58:57.459642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.359 [2024-10-11 22:58:57.459669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.359 qpair failed and we were unable to recover it.
00:35:54.359 [2024-10-11 22:58:57.459808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.359 [2024-10-11 22:58:57.459833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.359 qpair failed and we were unable to recover it.
00:35:54.359 [2024-10-11 22:58:57.459975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.359 [2024-10-11 22:58:57.460016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.359 qpair failed and we were unable to recover it.
00:35:54.359 [2024-10-11 22:58:57.460128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.359 [2024-10-11 22:58:57.460175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.359 qpair failed and we were unable to recover it.
00:35:54.359 [2024-10-11 22:58:57.460380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.359 [2024-10-11 22:58:57.460421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.359 qpair failed and we were unable to recover it.
00:35:54.359 [2024-10-11 22:58:57.460592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.359 [2024-10-11 22:58:57.460634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.359 qpair failed and we were unable to recover it.
00:35:54.359 [2024-10-11 22:58:57.460721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.359 [2024-10-11 22:58:57.460746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.359 qpair failed and we were unable to recover it.
00:35:54.359 [2024-10-11 22:58:57.460834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.359 [2024-10-11 22:58:57.460863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.359 qpair failed and we were unable to recover it.
00:35:54.359 [2024-10-11 22:58:57.460947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.359 [2024-10-11 22:58:57.460972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.359 qpair failed and we were unable to recover it.
00:35:54.359 [2024-10-11 22:58:57.461167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.359 [2024-10-11 22:58:57.461209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.359 qpair failed and we were unable to recover it.
00:35:54.359 [2024-10-11 22:58:57.461381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-11 22:58:57.461422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-11 22:58:57.461561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-11 22:58:57.461588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-11 22:58:57.461675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-11 22:58:57.461701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-11 22:58:57.461791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-11 22:58:57.461816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-11 22:58:57.461937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-11 22:58:57.461976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 
00:35:54.359 [2024-10-11 22:58:57.462100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-11 22:58:57.462127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-11 22:58:57.462300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-11 22:58:57.462363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-11 22:58:57.462479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-11 22:58:57.462506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-11 22:58:57.462604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-11 22:58:57.462631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-11 22:58:57.462748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-11 22:58:57.462774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 
00:35:54.359 [2024-10-11 22:58:57.462888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-11 22:58:57.462914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-11 22:58:57.463027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-11 22:58:57.463081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-11 22:58:57.463271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-11 22:58:57.463323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-11 22:58:57.463437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-11 22:58:57.463463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.463540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.463572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 
00:35:54.360 [2024-10-11 22:58:57.463661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.463686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.463804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.463830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.463912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.463938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.464060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.464086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.464166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.464191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 
00:35:54.360 [2024-10-11 22:58:57.464307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.464333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.464446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.464472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.464637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.464676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.464837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.464877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.464980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.465007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 
00:35:54.360 [2024-10-11 22:58:57.465096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.465121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.465223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.465248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.465356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.465382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.465489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.465516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.465616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.465645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 
00:35:54.360 [2024-10-11 22:58:57.465724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.465749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.465838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.465872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.465956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.465980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.466071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.466097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.466204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.466230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 
00:35:54.360 [2024-10-11 22:58:57.466377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.466406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.466523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.466557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.466675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.466701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.466838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.466871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.466962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.466988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 
00:35:54.360 [2024-10-11 22:58:57.467170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.467219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.467331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.467357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.467484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.467524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.467639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.467678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.467810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.467838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 
00:35:54.360 [2024-10-11 22:58:57.467998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.468039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.468178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.468227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.468432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.468471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.468611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.468637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.468714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.468741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 
00:35:54.360 [2024-10-11 22:58:57.468827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.468852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.468959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.468984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.469092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.469183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.469397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-11 22:58:57.469460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-11 22:58:57.469605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-11 22:58:57.469632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 
00:35:54.361 [2024-10-11 22:58:57.469756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-11 22:58:57.469795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-11 22:58:57.469923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-11 22:58:57.469951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-11 22:58:57.470069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-11 22:58:57.470095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-11 22:58:57.470205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-11 22:58:57.470230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-11 22:58:57.470371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-11 22:58:57.470397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 
00:35:54.361 [2024-10-11 22:58:57.470512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-11 22:58:57.470540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-11 22:58:57.470670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-11 22:58:57.470697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-11 22:58:57.470797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-11 22:58:57.470836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-11 22:58:57.470961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-11 22:58:57.470987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-11 22:58:57.471135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-11 22:58:57.471182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 
00:35:54.361 [2024-10-11 22:58:57.471310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-11 22:58:57.471369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-11 22:58:57.471493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-11 22:58:57.471519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-11 22:58:57.471654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-11 22:58:57.471693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-11 22:58:57.471787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-11 22:58:57.471814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-11 22:58:57.471926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-11 22:58:57.471951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 
00:35:54.361 [2024-10-11 22:58:57.472057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-11 22:58:57.472096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-11 22:58:57.472275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-11 22:58:57.472317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-11 22:58:57.472529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-11 22:58:57.472603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-11 22:58:57.472692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-11 22:58:57.472719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-11 22:58:57.472833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-11 22:58:57.472865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 
00:35:54.361 [2024-10-11 22:58:57.473000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.361 [2024-10-11 22:58:57.473026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.361 qpair failed and we were unable to recover it.
00:35:54.361 [2024-10-11 22:58:57.473244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.361 [2024-10-11 22:58:57.473287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.361 qpair failed and we were unable to recover it.
00:35:54.361 [2024-10-11 22:58:57.473448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.361 [2024-10-11 22:58:57.473489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.361 qpair failed and we were unable to recover it.
00:35:54.361 [2024-10-11 22:58:57.473632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.361 [2024-10-11 22:58:57.473658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.361 qpair failed and we were unable to recover it.
00:35:54.361 [2024-10-11 22:58:57.473789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.361 [2024-10-11 22:58:57.473827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.361 qpair failed and we were unable to recover it.
00:35:54.361 [2024-10-11 22:58:57.473954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.361 [2024-10-11 22:58:57.474006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.361 qpair failed and we were unable to recover it.
00:35:54.361 [2024-10-11 22:58:57.474095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.361 [2024-10-11 22:58:57.474122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.361 qpair failed and we were unable to recover it.
00:35:54.361 [2024-10-11 22:58:57.474304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.361 [2024-10-11 22:58:57.474352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.361 qpair failed and we were unable to recover it.
00:35:54.361 [2024-10-11 22:58:57.474450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.361 [2024-10-11 22:58:57.474479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.361 qpair failed and we were unable to recover it.
00:35:54.361 [2024-10-11 22:58:57.474607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.361 [2024-10-11 22:58:57.474645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.361 qpair failed and we were unable to recover it.
00:35:54.361 [2024-10-11 22:58:57.474765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.361 [2024-10-11 22:58:57.474792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.361 qpair failed and we were unable to recover it.
00:35:54.361 [2024-10-11 22:58:57.474977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.361 [2024-10-11 22:58:57.475027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.361 qpair failed and we were unable to recover it.
00:35:54.361 [2024-10-11 22:58:57.475216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.361 [2024-10-11 22:58:57.475268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.361 qpair failed and we were unable to recover it.
00:35:54.361 [2024-10-11 22:58:57.475461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.361 [2024-10-11 22:58:57.475529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.361 qpair failed and we were unable to recover it.
00:35:54.361 [2024-10-11 22:58:57.475686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.361 [2024-10-11 22:58:57.475711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.361 qpair failed and we were unable to recover it.
00:35:54.361 [2024-10-11 22:58:57.475803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.361 [2024-10-11 22:58:57.475829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.361 qpair failed and we were unable to recover it.
00:35:54.361 [2024-10-11 22:58:57.475947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.361 [2024-10-11 22:58:57.475973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.361 qpair failed and we were unable to recover it.
00:35:54.361 [2024-10-11 22:58:57.476090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.361 [2024-10-11 22:58:57.476115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.361 qpair failed and we were unable to recover it.
00:35:54.361 [2024-10-11 22:58:57.476240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.361 [2024-10-11 22:58:57.476280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.361 qpair failed and we were unable to recover it.
00:35:54.361 [2024-10-11 22:58:57.476436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.361 [2024-10-11 22:58:57.476462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.361 qpair failed and we were unable to recover it.
00:35:54.361 [2024-10-11 22:58:57.476571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.476597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.476689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.476716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.476830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.476865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.476971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.476997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.477206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.477248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.477467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.477508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.477706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.477745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.477831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.477858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.477937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.477964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.478112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.478162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.478340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.478391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.478484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.478510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.478607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.478634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.478753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.478778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.478900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.478926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.479017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.479044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.479211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.479255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.479421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.479447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.479530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.479572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.479656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.479681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.479768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.479793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.479920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.479950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.480074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.480116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.480312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.480355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.480480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.480511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.480652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.480690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.480792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.480820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.480961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.481010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.481095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.481120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.481202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.481228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.481325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.481364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.481480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.481506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.481608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.481633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.481747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.481772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.481884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.481909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.481996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.482021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.482164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.482191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.482293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.482349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.482455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.482494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.482649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.482677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.482772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.362 [2024-10-11 22:58:57.482801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.362 qpair failed and we were unable to recover it.
00:35:54.362 [2024-10-11 22:58:57.482914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.482941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.483079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.483106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.483335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.483376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.483497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.483524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.483619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.483645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.483750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.483776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.483900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.483926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.484046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.484072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.484185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.484233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.484442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.484504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.484680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.484706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.484809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.484835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.484933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.484960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.485166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.485246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.485442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.485504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.485620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.485647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.485756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.485781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.485880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.485906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.485988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.486014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.486131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.486181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.486322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.486376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.486571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.486597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.486687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.486712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.486824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.486860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.486989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.487014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.487171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.487212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.487436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.487478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.487639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.487666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.487774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.487799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.487889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.487914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.488002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.488027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.488137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.488180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.488366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.488409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.488578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.488621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.488735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.488760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.488871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.488895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.363 [2024-10-11 22:58:57.488998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.363 [2024-10-11 22:58:57.489023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.363 qpair failed and we were unable to recover it.
00:35:54.364 [2024-10-11 22:58:57.489143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.364 [2024-10-11 22:58:57.489168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.364 qpair failed and we were unable to recover it.
00:35:54.364 [2024-10-11 22:58:57.489391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.364 [2024-10-11 22:58:57.489430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.364 qpair failed and we were unable to recover it.
00:35:54.364 [2024-10-11 22:58:57.489582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.364 [2024-10-11 22:58:57.489608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.364 qpair failed and we were unable to recover it.
00:35:54.364 [2024-10-11 22:58:57.489688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.364 [2024-10-11 22:58:57.489713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.364 qpair failed and we were unable to recover it.
00:35:54.364 [2024-10-11 22:58:57.489826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.364 [2024-10-11 22:58:57.489851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.364 qpair failed and we were unable to recover it.
00:35:54.364 [2024-10-11 22:58:57.489962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.364 [2024-10-11 22:58:57.489988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.364 qpair failed and we were unable to recover it.
00:35:54.364 [2024-10-11 22:58:57.490103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.364 [2024-10-11 22:58:57.490128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.364 qpair failed and we were unable to recover it.
00:35:54.364 [2024-10-11 22:58:57.490246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.364 [2024-10-11 22:58:57.490298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.364 qpair failed and we were unable to recover it.
00:35:54.364 [2024-10-11 22:58:57.490560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.364 [2024-10-11 22:58:57.490612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.364 qpair failed and we were unable to recover it.
00:35:54.364 [2024-10-11 22:58:57.490712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.364 [2024-10-11 22:58:57.490751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.364 qpair failed and we were unable to recover it.
00:35:54.364 [2024-10-11 22:58:57.490856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.364 [2024-10-11 22:58:57.490884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.364 qpair failed and we were unable to recover it.
00:35:54.364 [2024-10-11 22:58:57.491028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.364 [2024-10-11 22:58:57.491055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.364 qpair failed and we were unable to recover it.
00:35:54.364 [2024-10-11 22:58:57.491171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.364 [2024-10-11 22:58:57.491197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.364 qpair failed and we were unable to recover it.
00:35:54.364 [2024-10-11 22:58:57.491334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.364 [2024-10-11 22:58:57.491372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.364 qpair failed and we were unable to recover it.
00:35:54.364 [2024-10-11 22:58:57.491490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.364 [2024-10-11 22:58:57.491517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.364 qpair failed and we were unable to recover it.
00:35:54.364 [2024-10-11 22:58:57.491620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.364 [2024-10-11 22:58:57.491647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.364 qpair failed and we were unable to recover it.
00:35:54.364 [2024-10-11 22:58:57.491729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.364 [2024-10-11 22:58:57.491756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.364 qpair failed and we were unable to recover it.
00:35:54.364 [2024-10-11 22:58:57.491840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-11 22:58:57.491866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-11 22:58:57.491947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-11 22:58:57.491973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-11 22:58:57.492062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-11 22:58:57.492089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-11 22:58:57.492174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-11 22:58:57.492201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-11 22:58:57.492312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-11 22:58:57.492338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 
00:35:54.364 [2024-10-11 22:58:57.492448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-11 22:58:57.492475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-11 22:58:57.492607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-11 22:58:57.492645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-11 22:58:57.492778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-11 22:58:57.492817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-11 22:58:57.492938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-11 22:58:57.492965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-11 22:58:57.493091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-11 22:58:57.493152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 
00:35:54.364 [2024-10-11 22:58:57.493230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-11 22:58:57.493256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-11 22:58:57.493342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-11 22:58:57.493367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-11 22:58:57.493456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-11 22:58:57.493483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-11 22:58:57.493583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-11 22:58:57.493622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-11 22:58:57.493711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-11 22:58:57.493740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 
00:35:54.364 [2024-10-11 22:58:57.493880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-11 22:58:57.493906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-11 22:58:57.493987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-11 22:58:57.494013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-11 22:58:57.494093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-11 22:58:57.494118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-11 22:58:57.494237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-11 22:58:57.494264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-11 22:58:57.494343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-11 22:58:57.494369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 
00:35:54.364 [2024-10-11 22:58:57.494475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-11 22:58:57.494514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-11 22:58:57.494613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-11 22:58:57.494640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-11 22:58:57.494778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-11 22:58:57.494805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-11 22:58:57.494896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-11 22:58:57.494922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.495007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.495032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 
00:35:54.365 [2024-10-11 22:58:57.495146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.495173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.495259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.495286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.495361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.495387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.495476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.495502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.495589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.495615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 
00:35:54.365 [2024-10-11 22:58:57.495724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.495750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.495832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.495857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.496011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.496039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.496156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.496183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.496292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.496317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 
00:35:54.365 [2024-10-11 22:58:57.496439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.496464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.496571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.496601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.496693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.496719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.496833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.496859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.496977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.497002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 
00:35:54.365 [2024-10-11 22:58:57.497133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.497171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.497292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.497319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.497463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.497501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.497601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.497629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.497719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.497747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 
00:35:54.365 [2024-10-11 22:58:57.497828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.497854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.497955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.497981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.498091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.498115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.498227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.498253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.498375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.498402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 
00:35:54.365 [2024-10-11 22:58:57.498483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.498509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.498595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.498622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.498710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.498736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.498822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.498847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.498935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.498960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 
00:35:54.365 [2024-10-11 22:58:57.499101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.499126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.499236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.499261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.499368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.499393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.499503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.499529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.499631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.499669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 
00:35:54.365 [2024-10-11 22:58:57.499760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.499788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.499881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.499907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.499996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.500021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.500134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-11 22:58:57.500160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-11 22:58:57.500245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-11 22:58:57.500271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 
00:35:54.366 [2024-10-11 22:58:57.500386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-11 22:58:57.500412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-11 22:58:57.500520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-11 22:58:57.500546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-11 22:58:57.500634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-11 22:58:57.500659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-11 22:58:57.500747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-11 22:58:57.500773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-11 22:58:57.500891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-11 22:58:57.500917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 
00:35:54.366 [2024-10-11 22:58:57.501026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-11 22:58:57.501051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-11 22:58:57.501164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-11 22:58:57.501189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-11 22:58:57.501308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-11 22:58:57.501346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-11 22:58:57.501432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-11 22:58:57.501459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-11 22:58:57.501547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-11 22:58:57.501640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 
00:35:54.366 [2024-10-11 22:58:57.501756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.501781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.501895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.501925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.502042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.502068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.502153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.502180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.502331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.502370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.502509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.502547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.502657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.502684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.502799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.502824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.502933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.502959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.503042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.503069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.503150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.503176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.503267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.503296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.503387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.503412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.503525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.503558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.503703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.503729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.503825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.503851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.503937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.503962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.504099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.504138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.504299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.504339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.504462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.504487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.504600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.504627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.504722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.504748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.504857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.504883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.504970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.504995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.505105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.505134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.505230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.505268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.505384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.505411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.505508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.505534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.505634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.505661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.505774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-10-11 22:58:57.505800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-10-11 22:58:57.505913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.505938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.506124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.506164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.506284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.506327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.506488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.506516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.506667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.506695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.506833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.506858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.506947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.506972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.507153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.507217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.507358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.507402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.507613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.507640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.507782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.507807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.507896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.507926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.508043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.508068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.508227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.508268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.508498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.508525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.508648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.508675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.508788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.508813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.508906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.508932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.509072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.509114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.509304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.509342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.509524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.509556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.509666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.509691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.509810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.509836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.509919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.509944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.510026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.510052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.510153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.510204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.510425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.510465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.510633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.510658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.510774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.510799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.510881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.510907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.510989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.511015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.511157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.511196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.511324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.511374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.511582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.511631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.511722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.511747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.511835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.511860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.367 qpair failed and we were unable to recover it.
00:35:54.367 [2024-10-11 22:58:57.511951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.367 [2024-10-11 22:58:57.511977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.512114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.512139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.512294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.512357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.512462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.512500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.512601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.512629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.512744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.512770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.512861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.512886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.513005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.513030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.513142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.513168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.513328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.513368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.513511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.513536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.513659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.513686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.513806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.513834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.513957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.513983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.514090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.514116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.514250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.514281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.514398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.514423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.514534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.514565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.514677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.514703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.514814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.514839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.514922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.514948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.515088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.515114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.515193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.515218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.515296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.515321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.515431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.515456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.515612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.515638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.515722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.515747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.515823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.515851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.515935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.515961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.516063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.516101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.516196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.516223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.516325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.516364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.516483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.516510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.516606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.516634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.516720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.516746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.516908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.516933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.517097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.517123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.517253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.517297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.517473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.517517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.517691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.517717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.517862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.517901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.368 qpair failed and we were unable to recover it.
00:35:54.368 [2024-10-11 22:58:57.518062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.368 [2024-10-11 22:58:57.518102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.369 qpair failed and we were unable to recover it.
00:35:54.369 [2024-10-11 22:58:57.518315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.369 [2024-10-11 22:58:57.518361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.369 qpair failed and we were unable to recover it.
00:35:54.369 [2024-10-11 22:58:57.518511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.369 [2024-10-11 22:58:57.518536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.369 qpair failed and we were unable to recover it.
00:35:54.369 [2024-10-11 22:58:57.518630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.369 [2024-10-11 22:58:57.518655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.369 qpair failed and we were unable to recover it.
00:35:54.369 [2024-10-11 22:58:57.518765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.369 [2024-10-11 22:58:57.518789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.369 qpair failed and we were unable to recover it.
00:35:54.369 [2024-10-11 22:58:57.518870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.369 [2024-10-11 22:58:57.518895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.369 qpair failed and we were unable to recover it.
00:35:54.369 [2024-10-11 22:58:57.519073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.369 [2024-10-11 22:58:57.519112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.369 qpair failed and we were unable to recover it.
00:35:54.369 [2024-10-11 22:58:57.519303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.369 [2024-10-11 22:58:57.519342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.369 qpair failed and we were unable to recover it.
00:35:54.369 [2024-10-11 22:58:57.519455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.519481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-11 22:58:57.519591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.519617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-11 22:58:57.519731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.519757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-11 22:58:57.519863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.519902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-11 22:58:57.520040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.520095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 
00:35:54.369 [2024-10-11 22:58:57.520232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.520286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-11 22:58:57.520425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.520451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-11 22:58:57.520541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.520577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-11 22:58:57.520666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.520692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-11 22:58:57.520804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.520829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 
00:35:54.369 [2024-10-11 22:58:57.520999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.521048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-11 22:58:57.521186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.521236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-11 22:58:57.521381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.521406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-11 22:58:57.521518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.521547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-11 22:58:57.521653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.521680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 
00:35:54.369 [2024-10-11 22:58:57.521823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.521855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-11 22:58:57.521941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.521992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-11 22:58:57.522172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.522212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-11 22:58:57.522346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.522386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-11 22:58:57.522532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.522573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 
00:35:54.369 [2024-10-11 22:58:57.522659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.522690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-11 22:58:57.522780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.522806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-11 22:58:57.522974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.523015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-11 22:58:57.523145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.523188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-11 22:58:57.523360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.523399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 
00:35:54.369 [2024-10-11 22:58:57.523577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.523616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-11 22:58:57.523727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.523766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-11 22:58:57.523892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.523919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-11 22:58:57.524002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.524029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-11 22:58:57.524115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.524141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 
00:35:54.369 [2024-10-11 22:58:57.524285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.524334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-11 22:58:57.524476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.524503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-11 22:58:57.524607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-11 22:58:57.524638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-11 22:58:57.524718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.524743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.524875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.524924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 
00:35:54.370 [2024-10-11 22:58:57.525104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.525153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.525255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.525318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.525406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.525431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.525570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.525597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.525681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.525706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 
00:35:54.370 [2024-10-11 22:58:57.525784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.525809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.525948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.525974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.526064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.526089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.526168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.526196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.526280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.526307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 
00:35:54.370 [2024-10-11 22:58:57.526399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.526438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.526522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.526554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.526646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.526676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.526764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.526789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.526934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.526959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 
00:35:54.370 [2024-10-11 22:58:57.527070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.527120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.527255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.527280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.527375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.527405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.527524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.527558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.527682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.527708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 
00:35:54.370 [2024-10-11 22:58:57.527791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.527835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.528013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.528052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.528257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.528297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.528445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.528471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.528592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.528617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 
00:35:54.370 [2024-10-11 22:58:57.528702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.528727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.528840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.528892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.529003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.529055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.529136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.529163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.529279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.529305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 
00:35:54.370 [2024-10-11 22:58:57.529418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.529443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.529532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.529568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.529684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.529710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.529791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.529818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.529966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.529992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 
00:35:54.370 [2024-10-11 22:58:57.530104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.530129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.530244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.530270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.530381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.530407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.530503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-11 22:58:57.530531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-11 22:58:57.530656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-11 22:58:57.530688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 
00:35:54.371 [2024-10-11 22:58:57.530774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.530799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.530885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.530911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.531011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.531038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.531124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.531149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.531296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.531349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.531459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.531485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.531575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.531601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.531717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.531742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.531823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.531850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.531990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.532015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.532153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.532200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.532274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.532299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.532383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.532410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.532509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.532534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.532647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.532672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.532755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.532781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.532877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.532903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.532979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.533007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.533116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.533142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.533229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.533254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.533378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.533416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.533536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.533569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.533680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.533706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.533795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.533820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.533928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.533955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.534036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.534062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.534179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.534206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.534303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.534328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.534441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.534466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.534563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.534589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.534697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.534722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.534828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.534853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.534966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.534991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.535105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.535132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.535211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.535236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.371 qpair failed and we were unable to recover it.
00:35:54.371 [2024-10-11 22:58:57.535345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.371 [2024-10-11 22:58:57.535370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.535449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.535474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.535574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.535601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.535686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.535710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.535816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.535841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.535963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.535988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.536068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.536094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.536178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.536202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.536291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.536315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.536421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.536445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.536561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.536586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.536663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.536690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.536775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.536801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.536912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.536937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.537012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.537037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.537142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.537167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.537240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.537264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.537356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.537395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.537527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.537578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.537676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.537704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.537791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.537819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.537927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.537953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.538067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.538094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.538202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.538229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.538335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.538375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.538459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.538486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.538601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.538628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.538713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.538739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.538823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.538850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.538992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.539017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.539170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.539209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.539362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.539422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.539601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.539629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.539746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.539772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.539863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.539888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.539967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.540009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.540119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.540159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.540329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.540372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.540531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.540565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.540681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.540706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.540880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.540921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.372 qpair failed and we were unable to recover it.
00:35:54.372 [2024-10-11 22:58:57.541114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.372 [2024-10-11 22:58:57.541155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.541285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.541337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.541471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.541512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.541652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.541678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.541802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.541828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.541917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.541944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.542127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.542169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.542393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.542434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.542593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.542621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.542714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.542740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.542829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.542854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.542965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.542991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.543103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.543128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.543245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.543287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.543401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.543444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.543653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.543692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.543819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.543846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.543928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.543962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.544074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.544100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.544268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.544316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.544422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.544448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.544593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.544619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.544757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.544782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.544867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.544892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.544977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.545003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.545102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.545139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.545259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.545286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.545423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.545462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.545586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.545614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.545726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.545751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.545867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.545892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.546022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.546066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.546231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.546271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.546393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.546440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.546547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.546581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.546657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.546683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.546805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.546848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.547070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.547108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.547269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.373 [2024-10-11 22:58:57.547308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.373 qpair failed and we were unable to recover it.
00:35:54.373 [2024-10-11 22:58:57.547488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-11 22:58:57.547517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-11 22:58:57.547613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-11 22:58:57.547640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-11 22:58:57.547742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.547780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.547917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.547960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.548131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.548172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 
00:35:54.374 [2024-10-11 22:58:57.548348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.548396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.548510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.548537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.548688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.548727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.548910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.548962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.549152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.549201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 
00:35:54.374 [2024-10-11 22:58:57.549408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.549469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.549560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.549587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.549671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.549697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.549815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.549840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.549980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.550023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 
00:35:54.374 [2024-10-11 22:58:57.550128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.550182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.550294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.550319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.550436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.550461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.550580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.550606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.550698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.550724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 
00:35:54.374 [2024-10-11 22:58:57.550806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.550831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.550904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.550929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.551033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.551059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.551146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.551175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.551293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.551319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 
00:35:54.374 [2024-10-11 22:58:57.551397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.551422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.551517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.551542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.551637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.551663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.551775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.551800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.551915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.551941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 
00:35:54.374 [2024-10-11 22:58:57.552084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.552133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.552273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.552322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.552436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.552462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.552585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.552612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.552704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.552730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 
00:35:54.374 [2024-10-11 22:58:57.552811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.552836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.552922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.552947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.553103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.553143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.553307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.553347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.553489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.553531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 
00:35:54.374 [2024-10-11 22:58:57.553649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.553674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.553764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.553789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.553935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-11 22:58:57.553960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-11 22:58:57.554143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.554184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.554308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.554352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 
00:35:54.375 [2024-10-11 22:58:57.554507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.554604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.554728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.554755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.554875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.554901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.554985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.555012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.555102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.555127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 
00:35:54.375 [2024-10-11 22:58:57.555326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.555366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.555528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.555597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.555719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.555745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.555830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.555854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.555967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.555992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 
00:35:54.375 [2024-10-11 22:58:57.556121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.556159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.556333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.556372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.556513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.556576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.556674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.556699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.556796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.556821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 
00:35:54.375 [2024-10-11 22:58:57.556910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.556934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.557048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.557074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.557190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.557215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.557381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.557421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.557576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.557602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 
00:35:54.375 [2024-10-11 22:58:57.557689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.557714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.557799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.557824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.557940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.557965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.558081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.558135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.558262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.558303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 
00:35:54.375 [2024-10-11 22:58:57.558446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.558487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.558658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.558685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.558839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.558877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.558994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.559048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.559179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.559233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 
00:35:54.375 [2024-10-11 22:58:57.559321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.559347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.559428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.559453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.559540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.559584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.559709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.559738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.559824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.559850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 
00:35:54.375 [2024-10-11 22:58:57.559940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.559968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.560109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.560135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.560222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.560247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.560342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-11 22:58:57.560372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-11 22:58:57.560492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-11 22:58:57.560519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 
00:35:54.376 [2024-10-11 22:58:57.560621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-11 22:58:57.560648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-11 22:58:57.560765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-11 22:58:57.560820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-11 22:58:57.560902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-11 22:58:57.560929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-11 22:58:57.561016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-11 22:58:57.561042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-11 22:58:57.561125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-11 22:58:57.561151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 
00:35:54.376 [2024-10-11 22:58:57.561259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-11 22:58:57.561284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-11 22:58:57.561391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-11 22:58:57.561417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-11 22:58:57.561529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-11 22:58:57.561565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-11 22:58:57.561682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-11 22:58:57.561708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-11 22:58:57.561785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-11 22:58:57.561810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 
00:35:54.376 [2024-10-11 22:58:57.561919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-11 22:58:57.561944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-11 22:58:57.562073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-11 22:58:57.562098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-11 22:58:57.562184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-11 22:58:57.562210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-11 22:58:57.562295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-11 22:58:57.562321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-11 22:58:57.562406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-11 22:58:57.562432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 
00:35:54.376 [2024-10-11 22:58:57.562521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-11 22:58:57.562547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-11 22:58:57.562642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-11 22:58:57.562667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-11 22:58:57.562781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-11 22:58:57.562806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-11 22:58:57.562883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-11 22:58:57.562909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-11 22:58:57.563026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-11 22:58:57.563051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 
00:35:54.376 [2024-10-11 22:58:57.563125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-11 22:58:57.563150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-11 22:58:57.563260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-11 22:58:57.563285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-11 22:58:57.563403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-11 22:58:57.563429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-11 22:58:57.563518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.563544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.563664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.563690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 
00:35:54.377 [2024-10-11 22:58:57.563805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.563831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.563912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.563937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.564029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.564054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.564139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.564164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.564283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.564308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 
00:35:54.377 [2024-10-11 22:58:57.564385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.564411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.564486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.564511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.564605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.564630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.564723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.564749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.564845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.564883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 
00:35:54.377 [2024-10-11 22:58:57.564978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.565007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.565099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.565126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.565245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.565269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.565361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.565388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.565472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.565498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 
00:35:54.377 [2024-10-11 22:58:57.565592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.565618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.565731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.565760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.565877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.565902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.565994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.566019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.566109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.566135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 
00:35:54.377 [2024-10-11 22:58:57.566247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.566271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.566357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.566382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.566493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.566518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.566610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.566636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.566745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.566770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 
00:35:54.377 [2024-10-11 22:58:57.566884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.566910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.567024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.567049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.567123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.567148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.567257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.567282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.567417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.567442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 
00:35:54.377 [2024-10-11 22:58:57.567563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.567589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.567673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.567699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.567790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.567815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.567900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.567925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.568037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.568063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 
00:35:54.377 [2024-10-11 22:58:57.568144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.568170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.568284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.568311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-11 22:58:57.568396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-11 22:58:57.568421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-11 22:58:57.568504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.568528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-11 22:58:57.568625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.568651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 
00:35:54.378 [2024-10-11 22:58:57.568731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.568758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-11 22:58:57.568850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.568875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-11 22:58:57.568985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.569011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-11 22:58:57.569099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.569129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-11 22:58:57.569220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.569246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 
00:35:54.378 [2024-10-11 22:58:57.569348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.569373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-11 22:58:57.569467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.569493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-11 22:58:57.569581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.569606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-11 22:58:57.569688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.569712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-11 22:58:57.569829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.569855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 
00:35:54.378 [2024-10-11 22:58:57.569951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.569976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-11 22:58:57.570068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.570093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-11 22:58:57.570238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.570263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-11 22:58:57.570350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.570375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-11 22:58:57.570456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.570480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 
00:35:54.378 [2024-10-11 22:58:57.570590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.570615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-11 22:58:57.570724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.570750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-11 22:58:57.570842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.570867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-11 22:58:57.570942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.570967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-11 22:58:57.571080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.571105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 
00:35:54.378 [2024-10-11 22:58:57.571216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.571241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-11 22:58:57.571349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.571374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-11 22:58:57.571491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.571516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-11 22:58:57.571609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.571634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-11 22:58:57.571749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.571773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 
00:35:54.378 [2024-10-11 22:58:57.571859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.571883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-11 22:58:57.571967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.571991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-11 22:58:57.572108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.572133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-11 22:58:57.572220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.572245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-11 22:58:57.572326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-11 22:58:57.572351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 
00:35:54.378 [2024-10-11 22:58:57.572432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.378 [2024-10-11 22:58:57.572461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.378 qpair failed and we were unable to recover it.
00:35:54.378 [2024-10-11 22:58:57.572581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.378 [2024-10-11 22:58:57.572607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.378 qpair failed and we were unable to recover it.
00:35:54.378 [2024-10-11 22:58:57.572723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.378 [2024-10-11 22:58:57.572748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.378 qpair failed and we were unable to recover it.
00:35:54.378 [2024-10-11 22:58:57.572843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.378 [2024-10-11 22:58:57.572867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.378 qpair failed and we were unable to recover it.
00:35:54.378 [2024-10-11 22:58:57.572953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.378 [2024-10-11 22:58:57.572978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.378 qpair failed and we were unable to recover it.
00:35:54.378 [2024-10-11 22:58:57.573062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.378 [2024-10-11 22:58:57.573087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.378 qpair failed and we were unable to recover it.
00:35:54.378 [2024-10-11 22:58:57.573162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.378 [2024-10-11 22:58:57.573186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.378 qpair failed and we were unable to recover it.
00:35:54.378 [2024-10-11 22:58:57.573289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.378 [2024-10-11 22:58:57.573328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.378 qpair failed and we were unable to recover it.
00:35:54.378 [2024-10-11 22:58:57.573421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.379 [2024-10-11 22:58:57.573448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.379 qpair failed and we were unable to recover it.
00:35:54.379 [2024-10-11 22:58:57.573540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.379 [2024-10-11 22:58:57.573576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.379 qpair failed and we were unable to recover it.
00:35:54.379 [2024-10-11 22:58:57.573666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.379 [2024-10-11 22:58:57.573693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.379 qpair failed and we were unable to recover it.
00:35:54.379 [2024-10-11 22:58:57.573797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.379 [2024-10-11 22:58:57.573824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.379 qpair failed and we were unable to recover it.
00:35:54.379 [2024-10-11 22:58:57.573939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.379 [2024-10-11 22:58:57.573965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.379 qpair failed and we were unable to recover it.
00:35:54.379 [2024-10-11 22:58:57.574054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.379 [2024-10-11 22:58:57.574079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.379 qpair failed and we were unable to recover it.
00:35:54.379 [2024-10-11 22:58:57.574197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.379 [2024-10-11 22:58:57.574223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.379 qpair failed and we were unable to recover it.
00:35:54.379 [2024-10-11 22:58:57.574305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.379 [2024-10-11 22:58:57.574332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.379 qpair failed and we were unable to recover it.
00:35:54.379 [2024-10-11 22:58:57.574418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.379 [2024-10-11 22:58:57.574443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.379 qpair failed and we were unable to recover it.
00:35:54.379 [2024-10-11 22:58:57.574521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.379 [2024-10-11 22:58:57.574547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.379 qpair failed and we were unable to recover it.
00:35:54.379 [2024-10-11 22:58:57.574641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.379 [2024-10-11 22:58:57.574666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.379 qpair failed and we were unable to recover it.
00:35:54.379 [2024-10-11 22:58:57.574749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.379 [2024-10-11 22:58:57.574774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.379 qpair failed and we were unable to recover it.
00:35:54.379 [2024-10-11 22:58:57.574875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.379 [2024-10-11 22:58:57.574914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.379 qpair failed and we were unable to recover it.
00:35:54.379 [2024-10-11 22:58:57.575008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.379 [2024-10-11 22:58:57.575035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.379 qpair failed and we were unable to recover it.
00:35:54.379 [2024-10-11 22:58:57.575121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.379 [2024-10-11 22:58:57.575147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.379 qpair failed and we were unable to recover it.
00:35:54.379 [2024-10-11 22:58:57.575259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.379 [2024-10-11 22:58:57.575284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.379 qpair failed and we were unable to recover it.
00:35:54.379 [2024-10-11 22:58:57.575390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.379 [2024-10-11 22:58:57.575415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.379 qpair failed and we were unable to recover it.
00:35:54.379 [2024-10-11 22:58:57.575525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.379 [2024-10-11 22:58:57.575556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.379 qpair failed and we were unable to recover it.
00:35:54.379 [2024-10-11 22:58:57.575633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.379 [2024-10-11 22:58:57.575658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.379 qpair failed and we were unable to recover it.
00:35:54.379 [2024-10-11 22:58:57.575750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.379 [2024-10-11 22:58:57.575780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.379 qpair failed and we were unable to recover it.
00:35:54.379 [2024-10-11 22:58:57.575873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.379 [2024-10-11 22:58:57.575899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.379 qpair failed and we were unable to recover it.
00:35:54.379 [2024-10-11 22:58:57.575986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.379 [2024-10-11 22:58:57.576012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.379 qpair failed and we were unable to recover it.
00:35:54.379 [2024-10-11 22:58:57.576098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.379 [2024-10-11 22:58:57.576123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.379 qpair failed and we were unable to recover it.
00:35:54.379 [2024-10-11 22:58:57.576207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.379 [2024-10-11 22:58:57.576233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.379 qpair failed and we were unable to recover it.
00:35:54.379 [2024-10-11 22:58:57.576343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.379 [2024-10-11 22:58:57.576382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.379 qpair failed and we were unable to recover it.
00:35:54.663 [2024-10-11 22:58:57.576535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.576602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.576688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.576714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.576838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.576890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.577029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.577078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.577224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.577273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.577361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.577386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.577477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.577507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.577603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.577629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.577722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.577749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.577843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.577868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.578027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.578066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.578186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.578225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.578341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.578394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.578541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.578608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.578695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.578721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.578817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.578843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.578941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.578966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.579073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.579098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.579232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.579272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.579439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.579479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.579612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.579639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.579743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.579781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.579873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.579901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.580050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.580089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.580211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.580250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.580392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.580431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.580584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.580628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.580710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.580736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.580825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.580851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.580964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.580989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.581120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.581158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.581287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.581326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.581448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.581487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.581630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.581656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.581763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.581794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.581878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.581904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.582010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.582048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.582171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.582209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.582343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.582382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.664 [2024-10-11 22:58:57.582500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.664 [2024-10-11 22:58:57.582537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.664 qpair failed and we were unable to recover it.
00:35:54.665 [2024-10-11 22:58:57.582668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.665 [2024-10-11 22:58:57.582693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.665 qpair failed and we were unable to recover it.
00:35:54.665 [2024-10-11 22:58:57.582779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.665 [2024-10-11 22:58:57.582805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.665 qpair failed and we were unable to recover it.
00:35:54.665 [2024-10-11 22:58:57.582912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.665 [2024-10-11 22:58:57.582951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.665 qpair failed and we were unable to recover it.
00:35:54.665 [2024-10-11 22:58:57.583160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.665 [2024-10-11 22:58:57.583198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.665 qpair failed and we were unable to recover it.
00:35:54.665 [2024-10-11 22:58:57.583352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.665 [2024-10-11 22:58:57.583390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.665 qpair failed and we were unable to recover it.
00:35:54.665 [2024-10-11 22:58:57.583525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.665 [2024-10-11 22:58:57.583575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.665 qpair failed and we were unable to recover it.
00:35:54.665 [2024-10-11 22:58:57.583696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.665 [2024-10-11 22:58:57.583722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.665 qpair failed and we were unable to recover it.
00:35:54.665 [2024-10-11 22:58:57.583833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.665 [2024-10-11 22:58:57.583858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.665 qpair failed and we were unable to recover it.
00:35:54.665 [2024-10-11 22:58:57.583950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.665 [2024-10-11 22:58:57.583976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.665 qpair failed and we were unable to recover it.
00:35:54.665 [2024-10-11 22:58:57.584065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.665 [2024-10-11 22:58:57.584090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.665 qpair failed and we were unable to recover it.
00:35:54.665 [2024-10-11 22:58:57.584237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.665 [2024-10-11 22:58:57.584274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.665 qpair failed and we were unable to recover it.
00:35:54.665 [2024-10-11 22:58:57.584396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.665 [2024-10-11 22:58:57.584434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.665 qpair failed and we were unable to recover it.
00:35:54.665 [2024-10-11 22:58:57.584581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.665 [2024-10-11 22:58:57.584607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.665 qpair failed and we were unable to recover it.
00:35:54.665 [2024-10-11 22:58:57.584724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.665 [2024-10-11 22:58:57.584750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.665 qpair failed and we were unable to recover it.
00:35:54.665 [2024-10-11 22:58:57.584833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.665 [2024-10-11 22:58:57.584859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.665 qpair failed and we were unable to recover it.
00:35:54.665 [2024-10-11 22:58:57.584955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.665 [2024-10-11 22:58:57.584982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.665 qpair failed and we were unable to recover it.
00:35:54.665 [2024-10-11 22:58:57.585142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.665 [2024-10-11 22:58:57.585181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.665 qpair failed and we were unable to recover it.
00:35:54.665 [2024-10-11 22:58:57.585378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.665 [2024-10-11 22:58:57.585416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.665 qpair failed and we were unable to recover it.
00:35:54.665 [2024-10-11 22:58:57.585556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.665 [2024-10-11 22:58:57.585613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.665 qpair failed and we were unable to recover it.
00:35:54.665 [2024-10-11 22:58:57.585704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.665 [2024-10-11 22:58:57.585729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.665 qpair failed and we were unable to recover it.
00:35:54.665 [2024-10-11 22:58:57.585817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.665 [2024-10-11 22:58:57.585843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.665 qpair failed and we were unable to recover it.
00:35:54.665 [2024-10-11 22:58:57.585938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.665 [2024-10-11 22:58:57.585964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.665 qpair failed and we were unable to recover it.
00:35:54.665 [2024-10-11 22:58:57.586046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.665 [2024-10-11 22:58:57.586071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.665 qpair failed and we were unable to recover it. 00:35:54.665 [2024-10-11 22:58:57.586162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.665 [2024-10-11 22:58:57.586188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.665 qpair failed and we were unable to recover it. 00:35:54.665 [2024-10-11 22:58:57.586305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.665 [2024-10-11 22:58:57.586330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.665 qpair failed and we were unable to recover it. 00:35:54.665 [2024-10-11 22:58:57.586415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.665 [2024-10-11 22:58:57.586441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.665 qpair failed and we were unable to recover it. 00:35:54.665 [2024-10-11 22:58:57.586533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.665 [2024-10-11 22:58:57.586566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.665 qpair failed and we were unable to recover it. 
00:35:54.665 [2024-10-11 22:58:57.586686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.665 [2024-10-11 22:58:57.586711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.665 qpair failed and we were unable to recover it. 00:35:54.665 [2024-10-11 22:58:57.586818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.665 [2024-10-11 22:58:57.586844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.665 qpair failed and we were unable to recover it. 00:35:54.665 [2024-10-11 22:58:57.586930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.665 [2024-10-11 22:58:57.586956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.665 qpair failed and we were unable to recover it. 00:35:54.665 [2024-10-11 22:58:57.587075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.665 [2024-10-11 22:58:57.587100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.665 qpair failed and we were unable to recover it. 00:35:54.665 [2024-10-11 22:58:57.587188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.665 [2024-10-11 22:58:57.587214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.665 qpair failed and we were unable to recover it. 
00:35:54.665 [2024-10-11 22:58:57.587354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.665 [2024-10-11 22:58:57.587394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.665 qpair failed and we were unable to recover it. 00:35:54.665 [2024-10-11 22:58:57.587527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.665 [2024-10-11 22:58:57.587576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.665 qpair failed and we were unable to recover it. 00:35:54.665 [2024-10-11 22:58:57.587686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.665 [2024-10-11 22:58:57.587717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.665 qpair failed and we were unable to recover it. 00:35:54.665 [2024-10-11 22:58:57.587797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.665 [2024-10-11 22:58:57.587822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.665 qpair failed and we were unable to recover it. 00:35:54.665 [2024-10-11 22:58:57.587930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.665 [2024-10-11 22:58:57.587955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.665 qpair failed and we were unable to recover it. 
00:35:54.665 [2024-10-11 22:58:57.588079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.665 [2024-10-11 22:58:57.588117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.665 qpair failed and we were unable to recover it. 00:35:54.665 [2024-10-11 22:58:57.588273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.665 [2024-10-11 22:58:57.588322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.665 qpair failed and we were unable to recover it. 00:35:54.665 [2024-10-11 22:58:57.588508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.665 [2024-10-11 22:58:57.588577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.665 qpair failed and we were unable to recover it. 00:35:54.665 [2024-10-11 22:58:57.588715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.588745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.588839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.588865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 
00:35:54.666 [2024-10-11 22:58:57.588949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.588975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.589122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.589160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.589291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.589328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.589473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.589513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.589653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.589681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 
00:35:54.666 [2024-10-11 22:58:57.589798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.589823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.589922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.589948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.590106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.590144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.590305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.590344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.590497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.590536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 
00:35:54.666 [2024-10-11 22:58:57.590705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.590731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.590812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.590838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.590925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.590950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.591090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.591116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.591208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.591235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 
00:35:54.666 [2024-10-11 22:58:57.591326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.591351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.591494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.591532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.591674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.591715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.591843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.591881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.592007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.592047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 
00:35:54.666 [2024-10-11 22:58:57.592203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.592240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.592407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.592445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.592576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.592616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.592747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.592785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.592920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.592960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 
00:35:54.666 [2024-10-11 22:58:57.593079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.593117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.593231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.593270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.593399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.593437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.593567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.593606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.593767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.593805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 
00:35:54.666 [2024-10-11 22:58:57.593961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.593999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.594135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.594174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.594320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.594369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.594493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.594536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.594693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.594733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 
00:35:54.666 [2024-10-11 22:58:57.594863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.594901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.595025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.595063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.595252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.595291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.595451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.666 [2024-10-11 22:58:57.595490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.666 qpair failed and we were unable to recover it. 00:35:54.666 [2024-10-11 22:58:57.595659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.667 [2024-10-11 22:58:57.595698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.667 qpair failed and we were unable to recover it. 
00:35:54.667 [2024-10-11 22:58:57.595817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.667 [2024-10-11 22:58:57.595854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.667 qpair failed and we were unable to recover it. 00:35:54.667 [2024-10-11 22:58:57.596002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.667 [2024-10-11 22:58:57.596041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.667 qpair failed and we were unable to recover it. 00:35:54.667 [2024-10-11 22:58:57.596188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.667 [2024-10-11 22:58:57.596225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.667 qpair failed and we were unable to recover it. 00:35:54.667 [2024-10-11 22:58:57.596394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.667 [2024-10-11 22:58:57.596432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.667 qpair failed and we were unable to recover it. 00:35:54.667 [2024-10-11 22:58:57.596603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.667 [2024-10-11 22:58:57.596642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.667 qpair failed and we were unable to recover it. 
00:35:54.667 [2024-10-11 22:58:57.596764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.667 [2024-10-11 22:58:57.596803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.667 qpair failed and we were unable to recover it. 00:35:54.667 [2024-10-11 22:58:57.596935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.667 [2024-10-11 22:58:57.596974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.667 qpair failed and we were unable to recover it. 00:35:54.667 [2024-10-11 22:58:57.597091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.667 [2024-10-11 22:58:57.597132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.667 qpair failed and we were unable to recover it. 00:35:54.667 [2024-10-11 22:58:57.597269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.667 [2024-10-11 22:58:57.597308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.667 qpair failed and we were unable to recover it. 00:35:54.667 [2024-10-11 22:58:57.597465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.667 [2024-10-11 22:58:57.597502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.667 qpair failed and we were unable to recover it. 
00:35:54.667 [2024-10-11 22:58:57.597639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.667 [2024-10-11 22:58:57.597677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.667 qpair failed and we were unable to recover it. 00:35:54.667 [2024-10-11 22:58:57.597862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.667 [2024-10-11 22:58:57.597901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.667 qpair failed and we were unable to recover it. 00:35:54.667 [2024-10-11 22:58:57.598058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.667 [2024-10-11 22:58:57.598096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.667 qpair failed and we were unable to recover it. 00:35:54.667 [2024-10-11 22:58:57.598296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.667 [2024-10-11 22:58:57.598335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.667 qpair failed and we were unable to recover it. 00:35:54.667 [2024-10-11 22:58:57.598505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.667 [2024-10-11 22:58:57.598545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.667 qpair failed and we were unable to recover it. 
00:35:54.667 [2024-10-11 22:58:57.598717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.667 [2024-10-11 22:58:57.598755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.667 qpair failed and we were unable to recover it. 00:35:54.667 [2024-10-11 22:58:57.598884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.667 [2024-10-11 22:58:57.598923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.667 qpair failed and we were unable to recover it. 00:35:54.667 [2024-10-11 22:58:57.599081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.667 [2024-10-11 22:58:57.599120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.667 qpair failed and we were unable to recover it. 00:35:54.667 [2024-10-11 22:58:57.599268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.667 [2024-10-11 22:58:57.599306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.667 qpair failed and we were unable to recover it. 00:35:54.667 [2024-10-11 22:58:57.599506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.667 [2024-10-11 22:58:57.599548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.667 qpair failed and we were unable to recover it. 
00:35:54.667 [2024-10-11 22:58:57.599695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.667 [2024-10-11 22:58:57.599734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.667 qpair failed and we were unable to recover it. 00:35:54.667 [2024-10-11 22:58:57.599868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.667 [2024-10-11 22:58:57.599906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.667 qpair failed and we were unable to recover it. 00:35:54.667 [2024-10-11 22:58:57.600056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.667 [2024-10-11 22:58:57.600094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.667 qpair failed and we were unable to recover it. 00:35:54.667 [2024-10-11 22:58:57.600211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.667 [2024-10-11 22:58:57.600250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.667 qpair failed and we were unable to recover it. 00:35:54.667 [2024-10-11 22:58:57.600404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.667 [2024-10-11 22:58:57.600442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.667 qpair failed and we were unable to recover it. 
00:35:54.667 [2024-10-11 22:58:57.600607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.667 [2024-10-11 22:58:57.600646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.667 qpair failed and we were unable to recover it. 
[The same three-message sequence — posix.c:1055:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock sock connection error; "qpair failed and we were unable to recover it." — repeats continuously from [2024-10-11 22:58:57.600768] through [2024-10-11 22:58:57.622883], alternating between tqpair=0x7ff3d8000b90, 0x7ff3cc000b90, and 0x7ff3d0000b90, all against addr=10.0.0.2, port=4420.]
00:35:54.670 [2024-10-11 22:58:57.623054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.670 [2024-10-11 22:58:57.623098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.670 qpair failed and we were unable to recover it. 00:35:54.670 [2024-10-11 22:58:57.623235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.670 [2024-10-11 22:58:57.623278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.670 qpair failed and we were unable to recover it. 00:35:54.670 [2024-10-11 22:58:57.623441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.670 [2024-10-11 22:58:57.623484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.670 qpair failed and we were unable to recover it. 00:35:54.670 [2024-10-11 22:58:57.623669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.670 [2024-10-11 22:58:57.623712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.670 qpair failed and we were unable to recover it. 00:35:54.670 [2024-10-11 22:58:57.623889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.670 [2024-10-11 22:58:57.623931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.670 qpair failed and we were unable to recover it. 
00:35:54.670 [2024-10-11 22:58:57.624117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.670 [2024-10-11 22:58:57.624161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.670 qpair failed and we were unable to recover it. 00:35:54.670 [2024-10-11 22:58:57.624347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.670 [2024-10-11 22:58:57.624390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.670 qpair failed and we were unable to recover it. 00:35:54.670 [2024-10-11 22:58:57.624526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.670 [2024-10-11 22:58:57.624583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.670 qpair failed and we were unable to recover it. 00:35:54.670 [2024-10-11 22:58:57.624743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.670 [2024-10-11 22:58:57.624785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.670 qpair failed and we were unable to recover it. 00:35:54.670 [2024-10-11 22:58:57.624922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.670 [2024-10-11 22:58:57.624961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.670 qpair failed and we were unable to recover it. 
00:35:54.670 [2024-10-11 22:58:57.625118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.670 [2024-10-11 22:58:57.625158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.670 qpair failed and we were unable to recover it. 00:35:54.670 [2024-10-11 22:58:57.625289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.670 [2024-10-11 22:58:57.625329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.670 qpair failed and we were unable to recover it. 00:35:54.670 [2024-10-11 22:58:57.625454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.670 [2024-10-11 22:58:57.625495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.670 qpair failed and we were unable to recover it. 00:35:54.670 [2024-10-11 22:58:57.625679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.670 [2024-10-11 22:58:57.625724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.670 qpair failed and we were unable to recover it. 00:35:54.670 [2024-10-11 22:58:57.625885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.670 [2024-10-11 22:58:57.625929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.670 qpair failed and we were unable to recover it. 
00:35:54.670 [2024-10-11 22:58:57.626069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.626112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.626253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.626296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.626515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.626569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.626713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.626752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.626920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.626977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 
00:35:54.671 [2024-10-11 22:58:57.627145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.627188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.627368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.627410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.627652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.627696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.627838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.627879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.628050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.628094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 
00:35:54.671 [2024-10-11 22:58:57.628276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.628319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.628507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.628560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.628694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.628736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.628904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.628945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.629167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.629209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 
00:35:54.671 [2024-10-11 22:58:57.629391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.629434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.629632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.629676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.629815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.629860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.630071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.630114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.630286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.630330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 
00:35:54.671 [2024-10-11 22:58:57.630501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.630543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.630686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.630730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.630938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.630981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.631156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.631202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.631346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.631398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 
00:35:54.671 [2024-10-11 22:58:57.631590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.631646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.631810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.631851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.632018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.632058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.632176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.632216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.632358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.632401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 
00:35:54.671 [2024-10-11 22:58:57.632587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.632627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.632741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.632780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.632964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.633021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.633214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.633258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.633424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.633483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 
00:35:54.671 [2024-10-11 22:58:57.633665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.633710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.633882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.633924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.634097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.634138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.634319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.634361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.671 [2024-10-11 22:58:57.634501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.634545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 
00:35:54.671 [2024-10-11 22:58:57.634757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.671 [2024-10-11 22:58:57.634802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.671 qpair failed and we were unable to recover it. 00:35:54.672 [2024-10-11 22:58:57.634963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.672 [2024-10-11 22:58:57.635005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.672 qpair failed and we were unable to recover it. 00:35:54.672 [2024-10-11 22:58:57.635155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.672 [2024-10-11 22:58:57.635198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.672 qpair failed and we were unable to recover it. 00:35:54.672 [2024-10-11 22:58:57.635382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.672 [2024-10-11 22:58:57.635424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.672 qpair failed and we were unable to recover it. 00:35:54.672 [2024-10-11 22:58:57.635574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.672 [2024-10-11 22:58:57.635618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.672 qpair failed and we were unable to recover it. 
00:35:54.672 [2024-10-11 22:58:57.635791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.672 [2024-10-11 22:58:57.635836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.672 qpair failed and we were unable to recover it. 00:35:54.672 [2024-10-11 22:58:57.636011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.672 [2024-10-11 22:58:57.636054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.672 qpair failed and we were unable to recover it. 00:35:54.672 [2024-10-11 22:58:57.636200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.672 [2024-10-11 22:58:57.636242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.672 qpair failed and we were unable to recover it. 00:35:54.672 [2024-10-11 22:58:57.636407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.672 [2024-10-11 22:58:57.636451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.672 qpair failed and we were unable to recover it. 00:35:54.672 [2024-10-11 22:58:57.636580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.672 [2024-10-11 22:58:57.636625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.672 qpair failed and we were unable to recover it. 
00:35:54.672 [2024-10-11 22:58:57.636815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.672 [2024-10-11 22:58:57.636862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.672 qpair failed and we were unable to recover it. 00:35:54.672 [2024-10-11 22:58:57.637012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.672 [2024-10-11 22:58:57.637058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.672 qpair failed and we were unable to recover it. 00:35:54.672 [2024-10-11 22:58:57.637246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.672 [2024-10-11 22:58:57.637292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.672 qpair failed and we were unable to recover it. 00:35:54.672 [2024-10-11 22:58:57.637451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.672 [2024-10-11 22:58:57.637491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.672 qpair failed and we were unable to recover it. 00:35:54.672 [2024-10-11 22:58:57.637672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.672 [2024-10-11 22:58:57.637712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.672 qpair failed and we were unable to recover it. 
00:35:54.672 [2024-10-11 22:58:57.637866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.672 [2024-10-11 22:58:57.637907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.672 qpair failed and we were unable to recover it. 00:35:54.672 [2024-10-11 22:58:57.638076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.672 [2024-10-11 22:58:57.638117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.672 qpair failed and we were unable to recover it. 00:35:54.672 [2024-10-11 22:58:57.638263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.672 [2024-10-11 22:58:57.638322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.672 qpair failed and we were unable to recover it. 00:35:54.672 [2024-10-11 22:58:57.638472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.672 [2024-10-11 22:58:57.638512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.672 qpair failed and we were unable to recover it. 00:35:54.672 [2024-10-11 22:58:57.638642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.672 [2024-10-11 22:58:57.638682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:54.672 qpair failed and we were unable to recover it. 
00:35:54.672 [2024-10-11 22:58:57.638855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.672 [2024-10-11 22:58:57.638895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:54.672 qpair failed and we were unable to recover it.
00:35:54.675 [... the same connect() failed (errno = 111) / "sock connection error" / "qpair failed and we were unable to recover it" sequence repeated approximately 114 more times between 22:58:57.639 and 22:58:57.663, for tqpair=0x7ff3cc000b90, 0x7ff3d0000b90, and 0x7ff3d8000b90, all with addr=10.0.0.2, port=4420 ...]
00:35:54.675 [2024-10-11 22:58:57.663235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-11 22:58:57.663279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-11 22:58:57.663447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-11 22:58:57.663492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-11 22:58:57.663668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-11 22:58:57.663714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-11 22:58:57.663911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-11 22:58:57.663955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-11 22:58:57.664126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-11 22:58:57.664170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 
00:35:54.675 [2024-10-11 22:58:57.664319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-11 22:58:57.664361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-11 22:58:57.664508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-11 22:58:57.664563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-11 22:58:57.664745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-11 22:58:57.664787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-11 22:58:57.664954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-11 22:58:57.664999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-11 22:58:57.665145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-11 22:58:57.665198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 
00:35:54.675 [2024-10-11 22:58:57.665425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-11 22:58:57.665469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-11 22:58:57.665632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-11 22:58:57.665675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-11 22:58:57.665869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-11 22:58:57.665914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-11 22:58:57.666091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-11 22:58:57.666136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-11 22:58:57.666290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-11 22:58:57.666334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 
00:35:54.675 [2024-10-11 22:58:57.666521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-11 22:58:57.666577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-11 22:58:57.666763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-11 22:58:57.666808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-11 22:58:57.666971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-11 22:58:57.667014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-11 22:58:57.667217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-11 22:58:57.667261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-11 22:58:57.667411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-11 22:58:57.667454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 
00:35:54.675 [2024-10-11 22:58:57.667658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-11 22:58:57.667701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-11 22:58:57.667841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-11 22:58:57.667883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-11 22:58:57.668011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-11 22:58:57.668071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-11 22:58:57.668239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-11 22:58:57.668283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-11 22:58:57.668428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.668473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 
00:35:54.676 [2024-10-11 22:58:57.668608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.668654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-11 22:58:57.668839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.668886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-11 22:58:57.669060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.669105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-11 22:58:57.669283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.669329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-11 22:58:57.669465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.669509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 
00:35:54.676 [2024-10-11 22:58:57.669733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.669783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-11 22:58:57.669953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.670000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-11 22:58:57.670199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.670264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-11 22:58:57.670455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.670500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-11 22:58:57.670649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.670696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 
00:35:54.676 [2024-10-11 22:58:57.670868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.670912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-11 22:58:57.671104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.671150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-11 22:58:57.671295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.671342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-11 22:58:57.671498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.671543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-11 22:58:57.671732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.671776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 
00:35:54.676 [2024-10-11 22:58:57.671959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.672005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-11 22:58:57.672159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.672208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-11 22:58:57.672364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.672408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-11 22:58:57.672596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.672642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-11 22:58:57.672801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.672847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 
00:35:54.676 [2024-10-11 22:58:57.672981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.673026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-11 22:58:57.673200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.673245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-11 22:58:57.673384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.673428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-11 22:58:57.673613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.673659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-11 22:58:57.673860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.673918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 
00:35:54.676 [2024-10-11 22:58:57.674086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.674131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-11 22:58:57.674295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.674340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-11 22:58:57.674570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.674634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-11 22:58:57.674816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.674861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-11 22:58:57.675032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.675077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 
00:35:54.676 [2024-10-11 22:58:57.675250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.675295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-11 22:58:57.675483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.675530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-11 22:58:57.675700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.675745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-11 22:58:57.675883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.675927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-11 22:58:57.676139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-11 22:58:57.676184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 
00:35:54.677 [2024-10-11 22:58:57.676375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-11 22:58:57.676422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-11 22:58:57.676582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-11 22:58:57.676629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-11 22:58:57.676786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-11 22:58:57.676831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-11 22:58:57.677021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-11 22:58:57.677067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-11 22:58:57.677268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-11 22:58:57.677316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 
00:35:54.677 [2024-10-11 22:58:57.677480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-11 22:58:57.677528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-11 22:58:57.677739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-11 22:58:57.677784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-11 22:58:57.677967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-11 22:58:57.678011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-11 22:58:57.678248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-11 22:58:57.678296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-11 22:58:57.678509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-11 22:58:57.678567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 
00:35:54.677 [2024-10-11 22:58:57.678749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-11 22:58:57.678799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-11 22:58:57.678950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-11 22:58:57.678998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-11 22:58:57.679161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-11 22:58:57.679210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-11 22:58:57.679397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-11 22:58:57.679447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-11 22:58:57.679666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-11 22:58:57.679715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 
00:35:54.677 [2024-10-11 22:58:57.679863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-11 22:58:57.679910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-11 22:58:57.680073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-11 22:58:57.680120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-11 22:58:57.680309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-11 22:58:57.680355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-11 22:58:57.680583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-11 22:58:57.680631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-11 22:58:57.680798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-11 22:58:57.680846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 
00:35:54.680 [2024-10-11 22:58:57.706797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.706843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-11 22:58:57.707036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.707087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-11 22:58:57.707288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.707335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-11 22:58:57.707508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.707560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-11 22:58:57.707735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.707781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 
00:35:54.680 [2024-10-11 22:58:57.708095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.708143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-11 22:58:57.708352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.708400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-11 22:58:57.708584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.708632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-11 22:58:57.708787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.708834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-11 22:58:57.709027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.709078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 
00:35:54.680 [2024-10-11 22:58:57.709295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.709342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-11 22:58:57.709505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.709568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-11 22:58:57.709776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.709825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-11 22:58:57.709985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.710033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-11 22:58:57.710218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.710266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 
00:35:54.680 [2024-10-11 22:58:57.710490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.710537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-11 22:58:57.710757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.710804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-11 22:58:57.711068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.711134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-11 22:58:57.711321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.711388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-11 22:58:57.711628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.711690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 
00:35:54.680 [2024-10-11 22:58:57.711867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.711914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-11 22:58:57.712130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.712175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-11 22:58:57.712338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.712389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-11 22:58:57.712577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.712626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-11 22:58:57.712813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.712858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 
00:35:54.680 [2024-10-11 22:58:57.713041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.713086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-11 22:58:57.713254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.713321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-11 22:58:57.713462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.713511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-11 22:58:57.713695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.713745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-11 22:58:57.713983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.714032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 
00:35:54.680 [2024-10-11 22:58:57.714222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.714290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-11 22:58:57.714444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-11 22:58:57.714492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.714703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.714751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.714974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.715022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.715185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.715233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 
00:35:54.681 [2024-10-11 22:58:57.715414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.715461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.715630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.715679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.715873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.715921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.716062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.716109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.716323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.716370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 
00:35:54.681 [2024-10-11 22:58:57.716569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.716618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.716788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.716838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.717035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.717085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.717302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.717351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.717534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.717594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 
00:35:54.681 [2024-10-11 22:58:57.717755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.717803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.718011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.718061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.718254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.718327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.718507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.718574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.718748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.718797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 
00:35:54.681 [2024-10-11 22:58:57.719016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.719070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.719271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.719325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.719485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.719539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.719807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.719861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.720040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.720094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 
00:35:54.681 [2024-10-11 22:58:57.720280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.720334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.720605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.720657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.720832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.720900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.721079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.721144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.721300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.721351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 
00:35:54.681 [2024-10-11 22:58:57.721568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.721638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.721810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.721880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.722089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.722158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.722360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.722408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.722543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.722603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 
00:35:54.681 [2024-10-11 22:58:57.722798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.722846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.723051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.723117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.723311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-11 22:58:57.723358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-11 22:58:57.723529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-11 22:58:57.723637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-11 22:58:57.723813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-11 22:58:57.723880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 
00:35:54.682 [2024-10-11 22:58:57.724033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-11 22:58:57.724082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-11 22:58:57.724277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-11 22:58:57.724325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-11 22:58:57.724488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-11 22:58:57.724537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-11 22:58:57.724749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-11 22:58:57.724816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-11 22:58:57.725021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-11 22:58:57.725089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 
00:35:54.682 [2024-10-11 22:58:57.725252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.725301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.725457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.725507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.725704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.725753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.725954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.726001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.726227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.726275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.726469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.726519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.726709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.726775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.727056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.727123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.727314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.727361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.727535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.727597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.727772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.727842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.728058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.728134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.728298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.728348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.728538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.728615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.728808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.728857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.729037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.729085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.729234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.729282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.729475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.729523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.729739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.729787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.729950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.730001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.730196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.730244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.730409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.730457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.730662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.730712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.730903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.730951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.731119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.731166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.731335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.731383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.731611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.731660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.731827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.731884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.732054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.732103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.732244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.732292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.732487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.732537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.732743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.732792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.732983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.733056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.682 [2024-10-11 22:58:57.733221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.682 [2024-10-11 22:58:57.733272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.682 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.733416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.733465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.733681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.733731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.733876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.733923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.734077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.734151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.734351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.734403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.734658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.734726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.734913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.734982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.735190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.735257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.735455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.735503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.735690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.735740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.735942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.735990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.736186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.736234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.736460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.736508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.736727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.736793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.737019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.737086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.737319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.737366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.737521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.737599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.737767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.737823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.737991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.738039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.738237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.738284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.738438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.738486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.738678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.738728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.738905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.738952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.739097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.739144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.739337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.739387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.739582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.739632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.739809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.739880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.740065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.740137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.740340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.740390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.740547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.740605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.740761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.740809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.741040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.741112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.741281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.741333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.741497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.741545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.741725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.741774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.741940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.741987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.742152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.742198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.742386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.742436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.742715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.742783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.742959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.743029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.683 [2024-10-11 22:58:57.743285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.683 [2024-10-11 22:58:57.743351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.683 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.743548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.743606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.743771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.743839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.744062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.744128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.744327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.744382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.744564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.744613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.744810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.744857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.745076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.745145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.745341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.745388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.745600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.745650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.745839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.745887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.746178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.746245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.746441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.746489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.746688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.746759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.747017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.747084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.747274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.747323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.747515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.747572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.747769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.747817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.748043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.748109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.748277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.748347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.748506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.748563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.748774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.748824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.749008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.749075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.749267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.749317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.749517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.749587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.749779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.749827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.749983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.750030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.750230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.750278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.750449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.750496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.750664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.750715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.750936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.750984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.751202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.751270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.751431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.751480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.751680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.751749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.752001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.752068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.752261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.752308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.752450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.752498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.752724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.752791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.752973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.753038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.753192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.753240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.753423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-10-11 22:58:57.753471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
00:35:54.684 [2024-10-11 22:58:57.753644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-11 22:58:57.753693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-11 22:58:57.753887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.753935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-11 22:58:57.754129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.754177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-11 22:58:57.754397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.754452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-11 22:58:57.754675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.754747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 
00:35:54.685 [2024-10-11 22:58:57.754971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.755041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-11 22:58:57.755204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.755251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-11 22:58:57.755441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.755490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-11 22:58:57.755704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.755781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-11 22:58:57.756025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.756096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 
00:35:54.685 [2024-10-11 22:58:57.756330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.756378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-11 22:58:57.756532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.756591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-11 22:58:57.756773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.756840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-11 22:58:57.757068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.757136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-11 22:58:57.757344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.757391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 
00:35:54.685 [2024-10-11 22:58:57.757600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.757651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-11 22:58:57.757881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.757949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-11 22:58:57.758148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.758214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-11 22:58:57.758415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.758462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-11 22:58:57.758697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.758765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 
00:35:54.685 [2024-10-11 22:58:57.758953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.759021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-11 22:58:57.759212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.759261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-11 22:58:57.759459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.759506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-11 22:58:57.759762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.759830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-11 22:58:57.760016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.760083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 
00:35:54.685 [2024-10-11 22:58:57.760263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.760313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-11 22:58:57.760469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.760521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-11 22:58:57.760722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.760791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-11 22:58:57.761045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.761112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-11 22:58:57.761320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.761368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 
00:35:54.685 [2024-10-11 22:58:57.761534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.761595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-11 22:58:57.761815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.761883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-11 22:58:57.762090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.762159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-11 22:58:57.762317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.762365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-11 22:58:57.762530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.762606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 
00:35:54.685 [2024-10-11 22:58:57.762799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.762848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-11 22:58:57.763002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-11 22:58:57.763049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.763207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.763255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.763458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.763507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.763688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.763738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 
00:35:54.686 [2024-10-11 22:58:57.763890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.763938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.764164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.764213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.764382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.764430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.764581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.764640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.764840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.764889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 
00:35:54.686 [2024-10-11 22:58:57.765100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.765167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.765365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.765413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.765652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.765720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.765969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.766036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.766222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.766270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 
00:35:54.686 [2024-10-11 22:58:57.766443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.766491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.766666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.766715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.766851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.766898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.767080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.767128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.767320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.767368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 
00:35:54.686 [2024-10-11 22:58:57.767523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.767584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.767797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.767865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.768085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.768151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.768389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.768437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.768650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.768719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 
00:35:54.686 [2024-10-11 22:58:57.768907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.768973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.769227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.769291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.769474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.769521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.769696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.769746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.769987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.770054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 
00:35:54.686 [2024-10-11 22:58:57.770196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.770244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.770401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.770450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.770697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.770774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.770983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.771049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.771275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.771322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 
00:35:54.686 [2024-10-11 22:58:57.771522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.771582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.771750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.771817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.772032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.772098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.772309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.772358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.772506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.772569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 
00:35:54.686 [2024-10-11 22:58:57.772785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.772850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.773016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.686 [2024-10-11 22:58:57.773082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.686 qpair failed and we were unable to recover it. 00:35:54.686 [2024-10-11 22:58:57.773231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.687 [2024-10-11 22:58:57.773279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.687 qpair failed and we were unable to recover it. 00:35:54.687 [2024-10-11 22:58:57.773429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.687 [2024-10-11 22:58:57.773476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.687 qpair failed and we were unable to recover it. 00:35:54.687 [2024-10-11 22:58:57.773655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.687 [2024-10-11 22:58:57.773706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.687 qpair failed and we were unable to recover it. 
00:35:54.687 [2024-10-11 22:58:57.773963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.687 [2024-10-11 22:58:57.774030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.687 qpair failed and we were unable to recover it. 00:35:54.687 [2024-10-11 22:58:57.774173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.687 [2024-10-11 22:58:57.774221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.687 qpair failed and we were unable to recover it. 00:35:54.687 [2024-10-11 22:58:57.774421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.687 [2024-10-11 22:58:57.774469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.687 qpair failed and we were unable to recover it. 00:35:54.687 [2024-10-11 22:58:57.774712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.687 [2024-10-11 22:58:57.774770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.687 qpair failed and we were unable to recover it. 00:35:54.687 [2024-10-11 22:58:57.774958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.687 [2024-10-11 22:58:57.775007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.687 qpair failed and we were unable to recover it. 
00:35:54.687 [2024-10-11 22:58:57.775190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.775238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.775427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.775474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.775679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.775728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.775875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.775923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.776064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.776114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.776273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.776323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.776484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.776532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.776774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.776821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.777023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.777071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.777216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.777265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.777419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.777466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.777679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.777727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.777921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.777969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.778157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.778205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.778349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.778399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.778597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.778647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.778806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.778856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.779018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.779068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.779263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.779311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.779468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.779516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.779714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.779762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.779957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.780005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.780192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.780240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.780404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.780452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.780682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.780731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.780915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.780964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.781160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.781207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.781404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.781453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.781625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.781676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.781882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.781930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.782104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.782152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.782356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.782405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.782548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.687 [2024-10-11 22:58:57.782612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.687 qpair failed and we were unable to recover it.
00:35:54.687 [2024-10-11 22:58:57.782770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.782818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.782972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.783020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.783176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.783224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.783468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.783517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.783686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.783736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.783921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.783974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.784127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.784174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.784356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.784401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.784584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.784632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.784812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.784857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.785037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.785082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.785244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.785290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.785508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.785564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.785710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.785754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.785921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.785966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.786179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.786225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.786408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.786453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.786589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.786635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.786782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.786827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.786985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.787031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.787189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.787234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.787416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.787461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.787620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.787666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.787827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.787871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.788041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.788086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.788234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.788282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.788466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.788510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.788671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.788717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.788903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.788949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.789091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.789136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.789282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.789327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.789469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.789515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.789723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.789769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.789945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.789992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.790172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.790219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.790374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.790419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.790564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.790611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.790802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.790847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.791018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.791065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.791251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.688 [2024-10-11 22:58:57.791296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.688 qpair failed and we were unable to recover it.
00:35:54.688 [2024-10-11 22:58:57.791488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.689 [2024-10-11 22:58:57.791530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.689 qpair failed and we were unable to recover it.
00:35:54.689 [2024-10-11 22:58:57.791677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.689 [2024-10-11 22:58:57.791720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.689 qpair failed and we were unable to recover it.
00:35:54.689 [2024-10-11 22:58:57.791864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.689 [2024-10-11 22:58:57.791907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.689 qpair failed and we were unable to recover it.
00:35:54.689 [2024-10-11 22:58:57.792042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.689 [2024-10-11 22:58:57.792086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.689 qpair failed and we were unable to recover it.
00:35:54.689 [2024-10-11 22:58:57.792224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.689 [2024-10-11 22:58:57.792267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.689 qpair failed and we were unable to recover it.
00:35:54.689 [2024-10-11 22:58:57.792391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.689 [2024-10-11 22:58:57.792441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.689 qpair failed and we were unable to recover it.
00:35:54.689 [2024-10-11 22:58:57.792585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.689 [2024-10-11 22:58:57.792629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.689 qpair failed and we were unable to recover it.
00:35:54.689 [2024-10-11 22:58:57.792797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.689 [2024-10-11 22:58:57.792839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.689 qpair failed and we were unable to recover it.
00:35:54.689 [2024-10-11 22:58:57.793009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.689 [2024-10-11 22:58:57.793051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.689 qpair failed and we were unable to recover it.
00:35:54.689 [2024-10-11 22:58:57.793201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.689 [2024-10-11 22:58:57.793244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.689 qpair failed and we were unable to recover it.
00:35:54.689 [2024-10-11 22:58:57.793380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.689 [2024-10-11 22:58:57.793422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.689 qpair failed and we were unable to recover it.
00:35:54.689 [2024-10-11 22:58:57.793625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.689 [2024-10-11 22:58:57.793669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.689 qpair failed and we were unable to recover it.
00:35:54.689 [2024-10-11 22:58:57.793800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.689 [2024-10-11 22:58:57.793844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.689 qpair failed and we were unable to recover it.
00:35:54.689 [2024-10-11 22:58:57.794014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.689 [2024-10-11 22:58:57.794058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.689 qpair failed and we were unable to recover it.
00:35:54.689 [2024-10-11 22:58:57.794192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.689 [2024-10-11 22:58:57.794235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.689 qpair failed and we were unable to recover it.
00:35:54.689 [2024-10-11 22:58:57.794403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.689 [2024-10-11 22:58:57.794447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.689 qpair failed and we were unable to recover it.
00:35:54.689 [2024-10-11 22:58:57.794599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.689 [2024-10-11 22:58:57.794643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:54.689 qpair failed and we were unable to recover it.
00:35:54.689 [2024-10-11 22:58:57.794842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.689 [2024-10-11 22:58:57.794906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.689 qpair failed and we were unable to recover it.
00:35:54.689 [2024-10-11 22:58:57.795060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.689 [2024-10-11 22:58:57.795106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.689 qpair failed and we were unable to recover it.
00:35:54.689 [2024-10-11 22:58:57.795266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.689 [2024-10-11 22:58:57.795309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.689 qpair failed and we were unable to recover it.
00:35:54.689 [2024-10-11 22:58:57.795460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.689 [2024-10-11 22:58:57.795502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.689 qpair failed and we were unable to recover it.
00:35:54.689 [2024-10-11 22:58:57.795688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.689 [2024-10-11 22:58:57.795732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.689 qpair failed and we were unable to recover it.
00:35:54.689 [2024-10-11 22:58:57.795889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.689 [2024-10-11 22:58:57.795933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.689 qpair failed and we were unable to recover it. 00:35:54.689 [2024-10-11 22:58:57.796072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.689 [2024-10-11 22:58:57.796116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.689 qpair failed and we were unable to recover it. 00:35:54.689 [2024-10-11 22:58:57.796243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.689 [2024-10-11 22:58:57.796286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.689 qpair failed and we were unable to recover it. 00:35:54.689 [2024-10-11 22:58:57.796429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.689 [2024-10-11 22:58:57.796472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.689 qpair failed and we were unable to recover it. 00:35:54.689 [2024-10-11 22:58:57.796649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.689 [2024-10-11 22:58:57.796693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.689 qpair failed and we were unable to recover it. 
00:35:54.689 [2024-10-11 22:58:57.796863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.689 [2024-10-11 22:58:57.796905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.689 qpair failed and we were unable to recover it. 00:35:54.689 [2024-10-11 22:58:57.797042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.689 [2024-10-11 22:58:57.797084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.689 qpair failed and we were unable to recover it. 00:35:54.689 [2024-10-11 22:58:57.797263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.689 [2024-10-11 22:58:57.797306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.689 qpair failed and we were unable to recover it. 00:35:54.689 [2024-10-11 22:58:57.797485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.689 [2024-10-11 22:58:57.797527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.689 qpair failed and we were unable to recover it. 00:35:54.689 [2024-10-11 22:58:57.797740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.689 [2024-10-11 22:58:57.797783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.689 qpair failed and we were unable to recover it. 
00:35:54.689 [2024-10-11 22:58:57.797931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.689 [2024-10-11 22:58:57.797993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.689 qpair failed and we were unable to recover it. 00:35:54.689 [2024-10-11 22:58:57.798135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.689 [2024-10-11 22:58:57.798175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.689 qpair failed and we were unable to recover it. 00:35:54.689 [2024-10-11 22:58:57.798314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.689 [2024-10-11 22:58:57.798354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.689 qpair failed and we were unable to recover it. 00:35:54.689 [2024-10-11 22:58:57.798488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.798527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.798669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.798709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 
00:35:54.690 [2024-10-11 22:58:57.798823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.798863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.799032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.799073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.799251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.799291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.799434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.799474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.799659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.799699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 
00:35:54.690 [2024-10-11 22:58:57.799819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.799859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.799999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.800040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.800196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.800236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.800396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.800437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.800586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.800628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 
00:35:54.690 [2024-10-11 22:58:57.800763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.800805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.800949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.800989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.801119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.801159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.801289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.801328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.801464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.801503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 
00:35:54.690 [2024-10-11 22:58:57.801671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.801712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.801844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.801885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.802044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.802084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.802216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.802256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.802453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.802493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 
00:35:54.690 [2024-10-11 22:58:57.802669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.802709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.802846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.802885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.803025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.803071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.803211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.803253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.803424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.803463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 
00:35:54.690 [2024-10-11 22:58:57.803617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.803658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.803780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.803820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.803954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.803994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.804123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.804163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.804308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.804348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 
00:35:54.690 [2024-10-11 22:58:57.804478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.804520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.804693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.804751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.804880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.804922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.805077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.805115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.805245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.805283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 
00:35:54.690 [2024-10-11 22:58:57.805412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.805451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.805623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.805663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.805789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.805827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.805957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.690 [2024-10-11 22:58:57.805995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.690 qpair failed and we were unable to recover it. 00:35:54.690 [2024-10-11 22:58:57.806184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.806223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 
00:35:54.691 [2024-10-11 22:58:57.806388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.806429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.806568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.806607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.806730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.806768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.806889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.806927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.807054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.807093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 
00:35:54.691 [2024-10-11 22:58:57.807214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.807251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.807404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.807442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.807566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.807606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.807770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.807807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.807968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.808017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 
00:35:54.691 [2024-10-11 22:58:57.808165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.808204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.808336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.808374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.808496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.808535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.808686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.808724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.808887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.808925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 
00:35:54.691 [2024-10-11 22:58:57.809067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.809105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.809225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.809263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.809394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.809432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.809571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.809610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.809745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.809782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 
00:35:54.691 [2024-10-11 22:58:57.809918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.809956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.810122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.810160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.810292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.810331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.810459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.810497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.810629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.810668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 
00:35:54.691 [2024-10-11 22:58:57.810782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.810820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.810939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.810978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.811164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.811203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.811355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.811393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.811528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.811573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 
00:35:54.691 [2024-10-11 22:58:57.811701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.811739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.811847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.811885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.811993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.812031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.812187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.812225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.812414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.812453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 
00:35:54.691 [2024-10-11 22:58:57.812584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.812622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.812785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.812827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.812932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.812968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.813090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.813127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-11 22:58:57.813278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.813314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 
00:35:54.691 [2024-10-11 22:58:57.813435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-11 22:58:57.813471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.813596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.813633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.813754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.813790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.813928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.813964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.814113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.814150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 
00:35:54.692 [2024-10-11 22:58:57.814264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.814300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.814484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.814520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.814675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.814712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.814870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.814906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.815043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.815080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 
00:35:54.692 [2024-10-11 22:58:57.815246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.815283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.815438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.815474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.815632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.815669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.815799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.815835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.815983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.816019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 
00:35:54.692 [2024-10-11 22:58:57.816175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.816211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.816323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.816358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.816508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.816546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.816676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.816712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.816830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.816867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 
00:35:54.692 [2024-10-11 22:58:57.817013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.817050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.817232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.817268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.817395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.817431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.817570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.817607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.817761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.817798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 
00:35:54.692 [2024-10-11 22:58:57.817913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.817949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.818111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.818148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.818305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.818341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.818485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.818522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.818680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.818717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 
00:35:54.692 [2024-10-11 22:58:57.818923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.818977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.819207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.819261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.819458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.819513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.819703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.819751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.819911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.819958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 
00:35:54.692 [2024-10-11 22:58:57.820215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.820271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.820478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.820536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.820752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.820800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.821021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.821069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.821290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.821344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 
00:35:54.692 [2024-10-11 22:58:57.821617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.821666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-11 22:58:57.821892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-11 22:58:57.821939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.822198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.822253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.822438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.822493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.822725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.822773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 
00:35:54.693 [2024-10-11 22:58:57.823030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.823098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.823386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.823441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.823638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.823687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.823927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.823982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.824196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.824252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 
00:35:54.693 [2024-10-11 22:58:57.824477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.824531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.824777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.824825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.825046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.825100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.825379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.825433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.825656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.825707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 
00:35:54.693 [2024-10-11 22:58:57.825920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.825975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.826228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.826282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.826592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.826640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.826812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.826860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.827122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.827169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 
00:35:54.693 [2024-10-11 22:58:57.827417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.827471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.827681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.827730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.827940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.827977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.828137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.828173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.828292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.828334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 
00:35:54.693 [2024-10-11 22:58:57.828505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.828541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.828755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.828802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.829034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.829088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.829333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.829388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.829626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.829675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 
00:35:54.693 [2024-10-11 22:58:57.829877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.829944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.830165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.830213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.830410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.830463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.830665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.830714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.830953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.831008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 
00:35:54.693 [2024-10-11 22:58:57.831215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.831269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.831482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.831537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.831723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-11 22:58:57.831780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-11 22:58:57.832005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-11 22:58:57.832059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-11 22:58:57.832240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-11 22:58:57.832294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 
00:35:54.694 [2024-10-11 22:58:57.832490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-11 22:58:57.832544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-11 22:58:57.832741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-11 22:58:57.832798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-11 22:58:57.832985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-11 22:58:57.833040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-11 22:58:57.833257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-11 22:58:57.833311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-11 22:58:57.833517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-11 22:58:57.833612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 
00:35:54.694 [2024-10-11 22:58:57.833869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-11 22:58:57.833923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-11 22:58:57.834094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-11 22:58:57.834149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-11 22:58:57.834391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-11 22:58:57.834445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-11 22:58:57.834655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-11 22:58:57.834711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-11 22:58:57.834969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-11 22:58:57.835024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 
00:35:54.694 [2024-10-11 22:58:57.835256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-11 22:58:57.835307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-11 22:58:57.835568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-11 22:58:57.835633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-11 22:58:57.835825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-11 22:58:57.835880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-11 22:58:57.836100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-11 22:58:57.836154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-11 22:58:57.836331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-11 22:58:57.836386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 
00:35:54.694 [2024-10-11 22:58:57.836599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.694 [2024-10-11 22:58:57.836655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.694 qpair failed and we were unable to recover it.
00:35:54.694 [2024-10-11 22:58:57.836786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.694 [2024-10-11 22:58:57.836822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.694 qpair failed and we were unable to recover it.
00:35:54.694 [2024-10-11 22:58:57.837062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.694 [2024-10-11 22:58:57.837116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.694 qpair failed and we were unable to recover it.
00:35:54.694 [2024-10-11 22:58:57.837301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.694 [2024-10-11 22:58:57.837357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.694 qpair failed and we were unable to recover it.
00:35:54.694 [2024-10-11 22:58:57.837536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.694 [2024-10-11 22:58:57.837602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.694 qpair failed and we were unable to recover it.
00:35:54.694 [2024-10-11 22:58:57.837884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.694 [2024-10-11 22:58:57.837942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.694 qpair failed and we were unable to recover it.
00:35:54.694 [2024-10-11 22:58:57.838142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.694 [2024-10-11 22:58:57.838200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.694 qpair failed and we were unable to recover it.
00:35:54.694 [2024-10-11 22:58:57.838436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.694 [2024-10-11 22:58:57.838494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.694 qpair failed and we were unable to recover it.
00:35:54.694 [2024-10-11 22:58:57.838758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.694 [2024-10-11 22:58:57.838814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.694 qpair failed and we were unable to recover it.
00:35:54.694 [2024-10-11 22:58:57.839062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.694 [2024-10-11 22:58:57.839117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.694 qpair failed and we were unable to recover it.
00:35:54.694 [2024-10-11 22:58:57.839345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.694 [2024-10-11 22:58:57.839401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.694 qpair failed and we were unable to recover it.
00:35:54.694 [2024-10-11 22:58:57.839582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.694 [2024-10-11 22:58:57.839656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.694 qpair failed and we were unable to recover it.
00:35:54.694 [2024-10-11 22:58:57.839912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.694 [2024-10-11 22:58:57.839966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.694 qpair failed and we were unable to recover it.
00:35:54.694 [2024-10-11 22:58:57.840164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.694 [2024-10-11 22:58:57.840219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.694 qpair failed and we were unable to recover it.
00:35:54.694 [2024-10-11 22:58:57.840472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.694 [2024-10-11 22:58:57.840526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.694 qpair failed and we were unable to recover it.
00:35:54.694 [2024-10-11 22:58:57.840766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.694 [2024-10-11 22:58:57.840821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.694 qpair failed and we were unable to recover it.
00:35:54.694 [2024-10-11 22:58:57.841005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.694 [2024-10-11 22:58:57.841059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.694 qpair failed and we were unable to recover it.
00:35:54.694 [2024-10-11 22:58:57.841273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.694 [2024-10-11 22:58:57.841352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.694 qpair failed and we were unable to recover it.
00:35:54.694 [2024-10-11 22:58:57.841607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.694 [2024-10-11 22:58:57.841663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.694 qpair failed and we were unable to recover it.
00:35:54.694 [2024-10-11 22:58:57.841850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.694 [2024-10-11 22:58:57.841907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.694 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.842145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.842204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.842438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.842497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.842729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.842788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.843057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.843126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.843360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.843419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.843635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.843695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.843929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.843991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.844226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.844283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.844489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.844548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.844832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.844891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.845118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.845176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.845407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.845464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.845720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.845781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.846046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.846104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.846345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.846403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.846668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.846728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.846961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.847020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.847221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.847280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.847578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.847643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.847844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.847904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.848087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.848146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.848410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.848468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.848752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.848813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.849080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.849138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.849375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.849433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.849679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.849740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.849928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.849987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.850257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.850315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.850544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.850622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.850900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.850937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.851125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.851161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.851414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.851476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.851732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.851793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.852040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.852101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.852368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.852427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.852663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.852723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.852877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.852936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.853166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.853225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.853397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.853484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.853695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.853754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.853995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.695 [2024-10-11 22:58:57.854053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.695 qpair failed and we were unable to recover it.
00:35:54.695 [2024-10-11 22:58:57.854278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.854337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.854609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.854647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.854795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.854831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.854999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.855062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.855359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.855423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.855691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.855757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.856051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.856109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.856350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.856386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.856515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.856562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.856775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.856834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.857058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.857117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.857353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.857412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.857647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.857707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.857977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.858036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.858303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.858362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.858636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.858673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.858826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.858862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.859123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.859187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.859434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.859496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.859799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.859859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.860106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.860165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.860395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.860452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.860751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.860816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.861122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.861187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.861437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.861500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.861826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.861922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.862230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.862268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.862423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.862459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.862775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.862841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.863132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.863196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.863494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.863587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.863852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.863918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.864101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.864164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.864417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.696 [2024-10-11 22:58:57.864481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.696 qpair failed and we were unable to recover it.
00:35:54.696 [2024-10-11 22:58:57.864750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-11 22:58:57.864815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-11 22:58:57.865051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-11 22:58:57.865086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-11 22:58:57.865204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-11 22:58:57.865240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-11 22:58:57.865361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-11 22:58:57.865399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-11 22:58:57.865623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-11 22:58:57.865688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 
00:35:54.696 [2024-10-11 22:58:57.865892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-11 22:58:57.865955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-11 22:58:57.866246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-11 22:58:57.866309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.866522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.866598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.866817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.866880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.867078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.867141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 
00:35:54.697 [2024-10-11 22:58:57.867443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.867506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.867822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.867886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.868102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.868168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.868423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.868487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.868759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.868824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 
00:35:54.697 [2024-10-11 22:58:57.869118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.869182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.869477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.869539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.869815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.869879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.870127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.870185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.870435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.870498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 
00:35:54.697 [2024-10-11 22:58:57.870765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.870830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.871121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.871185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.871470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.871533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.871832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.871896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.872201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.872264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 
00:35:54.697 [2024-10-11 22:58:57.872505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.872586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.872840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.872906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.873196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.873260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.873548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.873628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.873830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.873893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 
00:35:54.697 [2024-10-11 22:58:57.874114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.874179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.874419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.874482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.874741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.874807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.875043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.875106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.875390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.875454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 
00:35:54.697 [2024-10-11 22:58:57.875679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.875744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.875989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.876063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.876352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.876417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.876632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.876698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.876944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.877007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 
00:35:54.697 [2024-10-11 22:58:57.877241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.877304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.877547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.877633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.877882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.877949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.878241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.878304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.878596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.878661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 
00:35:54.697 [2024-10-11 22:58:57.878863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.878927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.879166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.879230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-11 22:58:57.879470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-11 22:58:57.879534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-11 22:58:57.879824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.879888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-11 22:58:57.880093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.880159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 
00:35:54.698 [2024-10-11 22:58:57.880421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.880485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-11 22:58:57.880763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.880828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-11 22:58:57.881115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.881178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-11 22:58:57.881472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.881535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-11 22:58:57.881854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.881918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 
00:35:54.698 [2024-10-11 22:58:57.882167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.882230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-11 22:58:57.882516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.882598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-11 22:58:57.882884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.882947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-11 22:58:57.883235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.883297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-11 22:58:57.883567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.883633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 
00:35:54.698 [2024-10-11 22:58:57.883881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.883944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-11 22:58:57.884213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.884276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-11 22:58:57.884478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.884543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-11 22:58:57.884853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.884916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-11 22:58:57.885160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.885222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 
00:35:54.698 [2024-10-11 22:58:57.885416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.885479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-11 22:58:57.885740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.885805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-11 22:58:57.886098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.886163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-11 22:58:57.886373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.886437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-11 22:58:57.886741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.886806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 
00:35:54.698 [2024-10-11 22:58:57.887016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.887080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-11 22:58:57.887312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.887375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-11 22:58:57.887623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.887688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-11 22:58:57.887935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.888002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-11 22:58:57.888304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.888367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 
00:35:54.698 [2024-10-11 22:58:57.888659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.888724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-11 22:58:57.889018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.889092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-11 22:58:57.889279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.889342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-11 22:58:57.889628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.889693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-11 22:58:57.889917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.889980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 
00:35:54.698 [2024-10-11 22:58:57.890200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.890262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-11 22:58:57.890569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.890633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-11 22:58:57.890923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-11 22:58:57.890986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-11 22:58:57.891256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-11 22:58:57.891318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-11 22:58:57.891575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-11 22:58:57.891640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 
00:35:54.699 [2024-10-11 22:58:57.891894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.699 [2024-10-11 22:58:57.891959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.699 qpair failed and we were unable to recover it.
00:35:54.979 [2024-10-11 22:58:57.928992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.979 [2024-10-11 22:58:57.929055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.979 qpair failed and we were unable to recover it. 00:35:54.979 [2024-10-11 22:58:57.929305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.979 [2024-10-11 22:58:57.929371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.979 qpair failed and we were unable to recover it. 00:35:54.979 [2024-10-11 22:58:57.929622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.979 [2024-10-11 22:58:57.929687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.979 qpair failed and we were unable to recover it. 00:35:54.979 [2024-10-11 22:58:57.929975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.979 [2024-10-11 22:58:57.930039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.979 qpair failed and we were unable to recover it. 00:35:54.979 [2024-10-11 22:58:57.930334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.979 [2024-10-11 22:58:57.930397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.979 qpair failed and we were unable to recover it. 
00:35:54.979 [2024-10-11 22:58:57.930691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.979 [2024-10-11 22:58:57.930754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.979 qpair failed and we were unable to recover it. 00:35:54.979 [2024-10-11 22:58:57.931002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.979 [2024-10-11 22:58:57.931067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.979 qpair failed and we were unable to recover it. 00:35:54.979 [2024-10-11 22:58:57.931319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.979 [2024-10-11 22:58:57.931383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.979 qpair failed and we were unable to recover it. 00:35:54.979 [2024-10-11 22:58:57.931628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.979 [2024-10-11 22:58:57.931692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.979 qpair failed and we were unable to recover it. 00:35:54.979 [2024-10-11 22:58:57.931915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.979 [2024-10-11 22:58:57.931978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.979 qpair failed and we were unable to recover it. 
00:35:54.979 [2024-10-11 22:58:57.932241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.979 [2024-10-11 22:58:57.932305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.979 qpair failed and we were unable to recover it. 00:35:54.979 [2024-10-11 22:58:57.932572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.979 [2024-10-11 22:58:57.932636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.979 qpair failed and we were unable to recover it. 00:35:54.979 [2024-10-11 22:58:57.932893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.979 [2024-10-11 22:58:57.932956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.979 qpair failed and we were unable to recover it. 00:35:54.979 [2024-10-11 22:58:57.933251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.933314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 00:35:54.980 [2024-10-11 22:58:57.933505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.933594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 
00:35:54.980 [2024-10-11 22:58:57.933848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.933911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 00:35:54.980 [2024-10-11 22:58:57.934168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.934231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 00:35:54.980 [2024-10-11 22:58:57.934494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.934575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 00:35:54.980 [2024-10-11 22:58:57.934887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.934951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 00:35:54.980 [2024-10-11 22:58:57.935248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.935311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 
00:35:54.980 [2024-10-11 22:58:57.935579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.935643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 00:35:54.980 [2024-10-11 22:58:57.935885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.935948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 00:35:54.980 [2024-10-11 22:58:57.936138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.936200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 00:35:54.980 [2024-10-11 22:58:57.936423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.936485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 00:35:54.980 [2024-10-11 22:58:57.936724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.936788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 
00:35:54.980 [2024-10-11 22:58:57.937041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.937104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 00:35:54.980 [2024-10-11 22:58:57.937394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.937457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 00:35:54.980 [2024-10-11 22:58:57.937692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.937758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 00:35:54.980 [2024-10-11 22:58:57.938048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.938111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 00:35:54.980 [2024-10-11 22:58:57.938361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.938423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 
00:35:54.980 [2024-10-11 22:58:57.938633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.938700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 00:35:54.980 [2024-10-11 22:58:57.938945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.939009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 00:35:54.980 [2024-10-11 22:58:57.939262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.939327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 00:35:54.980 [2024-10-11 22:58:57.939546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.939625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 00:35:54.980 [2024-10-11 22:58:57.939832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.939895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 
00:35:54.980 [2024-10-11 22:58:57.940123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.940186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 00:35:54.980 [2024-10-11 22:58:57.940395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.940478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 00:35:54.980 [2024-10-11 22:58:57.940730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.940797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 00:35:54.980 [2024-10-11 22:58:57.941040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.941105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 00:35:54.980 [2024-10-11 22:58:57.941346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.941408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 
00:35:54.980 [2024-10-11 22:58:57.941676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.941741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 00:35:54.980 [2024-10-11 22:58:57.942029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.942093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 00:35:54.980 [2024-10-11 22:58:57.942337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.942402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 00:35:54.980 [2024-10-11 22:58:57.942657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.942722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 00:35:54.980 [2024-10-11 22:58:57.942965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.980 [2024-10-11 22:58:57.943031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.980 qpair failed and we were unable to recover it. 
00:35:54.981 [2024-10-11 22:58:57.943325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.943388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 00:35:54.981 [2024-10-11 22:58:57.943700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.943764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 00:35:54.981 [2024-10-11 22:58:57.944015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.944082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 00:35:54.981 [2024-10-11 22:58:57.944288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.944351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 00:35:54.981 [2024-10-11 22:58:57.944601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.944667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 
00:35:54.981 [2024-10-11 22:58:57.944946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.945011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 00:35:54.981 [2024-10-11 22:58:57.945303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.945367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 00:35:54.981 [2024-10-11 22:58:57.945566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.945631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 00:35:54.981 [2024-10-11 22:58:57.945846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.945910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 00:35:54.981 [2024-10-11 22:58:57.946209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.946272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 
00:35:54.981 [2024-10-11 22:58:57.946523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.946622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 00:35:54.981 [2024-10-11 22:58:57.946920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.946983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 00:35:54.981 [2024-10-11 22:58:57.947270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.947333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 00:35:54.981 [2024-10-11 22:58:57.947587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.947652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 00:35:54.981 [2024-10-11 22:58:57.947901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.947964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 
00:35:54.981 [2024-10-11 22:58:57.948249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.948312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 00:35:54.981 [2024-10-11 22:58:57.948568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.948633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 00:35:54.981 [2024-10-11 22:58:57.948820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.948884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 00:35:54.981 [2024-10-11 22:58:57.949177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.949240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 00:35:54.981 [2024-10-11 22:58:57.949483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.949547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 
00:35:54.981 [2024-10-11 22:58:57.949856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.949920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 00:35:54.981 [2024-10-11 22:58:57.950178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.950244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 00:35:54.981 [2024-10-11 22:58:57.950542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.950620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 00:35:54.981 [2024-10-11 22:58:57.950916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.950979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 00:35:54.981 [2024-10-11 22:58:57.951235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.951301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 
00:35:54.981 [2024-10-11 22:58:57.951607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.951672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 00:35:54.981 [2024-10-11 22:58:57.951959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.952023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 00:35:54.981 [2024-10-11 22:58:57.952277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.952344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 00:35:54.981 [2024-10-11 22:58:57.952622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.952687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 00:35:54.981 [2024-10-11 22:58:57.952933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.952997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 
00:35:54.981 [2024-10-11 22:58:57.953283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.953347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 00:35:54.981 [2024-10-11 22:58:57.953597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.953671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.981 qpair failed and we were unable to recover it. 00:35:54.981 [2024-10-11 22:58:57.953991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.981 [2024-10-11 22:58:57.954055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.982 qpair failed and we were unable to recover it. 00:35:54.982 [2024-10-11 22:58:57.954297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.982 [2024-10-11 22:58:57.954361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.982 qpair failed and we were unable to recover it. 00:35:54.982 [2024-10-11 22:58:57.954617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.982 [2024-10-11 22:58:57.954681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.982 qpair failed and we were unable to recover it. 
00:35:54.984 [2024-10-11 22:58:57.972818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.984 [2024-10-11 22:58:57.972850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.984 qpair failed and we were unable to recover it.
00:35:54.984 [2024-10-11 22:58:57.972997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.984 [2024-10-11 22:58:57.973030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.984 qpair failed and we were unable to recover it.
00:35:54.984 [2024-10-11 22:58:57.973180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.984 [2024-10-11 22:58:57.973213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.984 qpair failed and we were unable to recover it.
00:35:54.984 [2024-10-11 22:58:57.973429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.984 [2024-10-11 22:58:57.973502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.984 qpair failed and we were unable to recover it.
00:35:54.984 [2024-10-11 22:58:57.973660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.984 [2024-10-11 22:58:57.973697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.984 qpair failed and we were unable to recover it.
00:35:54.984 [2024-10-11 22:58:57.973815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.984 [2024-10-11 22:58:57.973850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.984 qpair failed and we were unable to recover it.
00:35:54.984 [2024-10-11 22:58:57.973973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.984 [2024-10-11 22:58:57.974025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.984 qpair failed and we were unable to recover it.
00:35:54.984 [2024-10-11 22:58:57.974219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.984 [2024-10-11 22:58:57.974275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.984 qpair failed and we were unable to recover it.
00:35:54.984 [2024-10-11 22:58:57.974379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.984 [2024-10-11 22:58:57.974413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:54.984 qpair failed and we were unable to recover it.
00:35:54.984 [2024-10-11 22:58:57.974570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.984 [2024-10-11 22:58:57.974605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.984 qpair failed and we were unable to recover it.
00:35:54.985 [2024-10-11 22:58:57.984771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.985 [2024-10-11 22:58:57.984804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.985 qpair failed and we were unable to recover it. 00:35:54.985 [2024-10-11 22:58:57.984981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.985 [2024-10-11 22:58:57.985014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.985 qpair failed and we were unable to recover it. 00:35:54.985 [2024-10-11 22:58:57.985158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.985 [2024-10-11 22:58:57.985221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.985 qpair failed and we were unable to recover it. 00:35:54.985 [2024-10-11 22:58:57.985485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.985 [2024-10-11 22:58:57.985547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.985 qpair failed and we were unable to recover it. 00:35:54.985 [2024-10-11 22:58:57.985739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.985 [2024-10-11 22:58:57.985772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.985 qpair failed and we were unable to recover it. 
00:35:54.985 [2024-10-11 22:58:57.985909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.985 [2024-10-11 22:58:57.985988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.985 qpair failed and we were unable to recover it. 00:35:54.985 [2024-10-11 22:58:57.986279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.985 [2024-10-11 22:58:57.986342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.985 qpair failed and we were unable to recover it. 00:35:54.985 [2024-10-11 22:58:57.986593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.985 [2024-10-11 22:58:57.986627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.985 qpair failed and we were unable to recover it. 00:35:54.985 [2024-10-11 22:58:57.986759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.985 [2024-10-11 22:58:57.986791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.985 qpair failed and we were unable to recover it. 00:35:54.985 [2024-10-11 22:58:57.986987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.985 [2024-10-11 22:58:57.987054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.985 qpair failed and we were unable to recover it. 
00:35:54.985 [2024-10-11 22:58:57.987366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.985 [2024-10-11 22:58:57.987430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.985 qpair failed and we were unable to recover it. 00:35:54.985 [2024-10-11 22:58:57.987710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.985 [2024-10-11 22:58:57.987743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.985 qpair failed and we were unable to recover it. 00:35:54.985 [2024-10-11 22:58:57.987863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.985 [2024-10-11 22:58:57.987926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.985 qpair failed and we were unable to recover it. 00:35:54.985 [2024-10-11 22:58:57.988126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.985 [2024-10-11 22:58:57.988160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 00:35:54.986 [2024-10-11 22:58:57.988305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.988383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 
00:35:54.986 [2024-10-11 22:58:57.988617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.988650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 00:35:54.986 [2024-10-11 22:58:57.988762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.988796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 00:35:54.986 [2024-10-11 22:58:57.988989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.989052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 00:35:54.986 [2024-10-11 22:58:57.989292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.989355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 00:35:54.986 [2024-10-11 22:58:57.989556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.989590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 
00:35:54.986 [2024-10-11 22:58:57.989744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.989776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 00:35:54.986 [2024-10-11 22:58:57.989998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.990061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 00:35:54.986 [2024-10-11 22:58:57.990314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.990379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 00:35:54.986 [2024-10-11 22:58:57.990624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.990659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 00:35:54.986 [2024-10-11 22:58:57.990775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.990808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 
00:35:54.986 [2024-10-11 22:58:57.991040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.991073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 00:35:54.986 [2024-10-11 22:58:57.991275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.991338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 00:35:54.986 [2024-10-11 22:58:57.991630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.991663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 00:35:54.986 [2024-10-11 22:58:57.991826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.991889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 00:35:54.986 [2024-10-11 22:58:57.992134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.992198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 
00:35:54.986 [2024-10-11 22:58:57.992441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.992505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 00:35:54.986 [2024-10-11 22:58:57.992785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.992850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 00:35:54.986 [2024-10-11 22:58:57.993139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.993204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 00:35:54.986 [2024-10-11 22:58:57.993413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.993488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 00:35:54.986 [2024-10-11 22:58:57.993798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.993862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 
00:35:54.986 [2024-10-11 22:58:57.994154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.994218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 00:35:54.986 [2024-10-11 22:58:57.994502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.994584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 00:35:54.986 [2024-10-11 22:58:57.994862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.994926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 00:35:54.986 [2024-10-11 22:58:57.995211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.995274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 00:35:54.986 [2024-10-11 22:58:57.995481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.995544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 
00:35:54.986 [2024-10-11 22:58:57.995864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.995927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 00:35:54.986 [2024-10-11 22:58:57.996173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.996235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 00:35:54.986 [2024-10-11 22:58:57.996482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.996548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 00:35:54.986 [2024-10-11 22:58:57.996872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.986 [2024-10-11 22:58:57.996936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.986 qpair failed and we were unable to recover it. 00:35:54.986 [2024-10-11 22:58:57.997208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:57.997270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 
00:35:54.987 [2024-10-11 22:58:57.997583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:57.997649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 00:35:54.987 [2024-10-11 22:58:57.997909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:57.997973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 00:35:54.987 [2024-10-11 22:58:57.998272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:57.998335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 00:35:54.987 [2024-10-11 22:58:57.998598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:57.998663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 00:35:54.987 [2024-10-11 22:58:57.998882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:57.998946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 
00:35:54.987 [2024-10-11 22:58:57.999182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:57.999244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 00:35:54.987 [2024-10-11 22:58:57.999501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:57.999595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 00:35:54.987 [2024-10-11 22:58:57.999888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:57.999951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 00:35:54.987 [2024-10-11 22:58:58.000149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:58.000215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 00:35:54.987 [2024-10-11 22:58:58.000535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:58.000615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 
00:35:54.987 [2024-10-11 22:58:58.000861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:58.000928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 00:35:54.987 [2024-10-11 22:58:58.001224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:58.001288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 00:35:54.987 [2024-10-11 22:58:58.001585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:58.001651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 00:35:54.987 [2024-10-11 22:58:58.001908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:58.001970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 00:35:54.987 [2024-10-11 22:58:58.002186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:58.002248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 
00:35:54.987 [2024-10-11 22:58:58.002502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:58.002583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 00:35:54.987 [2024-10-11 22:58:58.002797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:58.002862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 00:35:54.987 [2024-10-11 22:58:58.003110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:58.003172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 00:35:54.987 [2024-10-11 22:58:58.003467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:58.003530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 00:35:54.987 [2024-10-11 22:58:58.003800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:58.003866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 
00:35:54.987 [2024-10-11 22:58:58.004162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:58.004225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 00:35:54.987 [2024-10-11 22:58:58.004514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:58.004597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 00:35:54.987 [2024-10-11 22:58:58.004901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:58.004965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 00:35:54.987 [2024-10-11 22:58:58.005218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:58.005280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 00:35:54.987 [2024-10-11 22:58:58.005581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:58.005646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 
00:35:54.987 [2024-10-11 22:58:58.005902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:58.005966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 00:35:54.987 [2024-10-11 22:58:58.006207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:58.006270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 00:35:54.987 [2024-10-11 22:58:58.006581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:58.006646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 00:35:54.987 [2024-10-11 22:58:58.006932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:58.007006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 00:35:54.987 [2024-10-11 22:58:58.007322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.987 [2024-10-11 22:58:58.007384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.987 qpair failed and we were unable to recover it. 
00:35:54.987 [2024-10-11 22:58:58.007634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.987 [2024-10-11 22:58:58.007699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.987 qpair failed and we were unable to recover it.
00:35:54.987 [2024-10-11 22:58:58.008001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.987 [2024-10-11 22:58:58.008064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.987 qpair failed and we were unable to recover it.
00:35:54.987 [2024-10-11 22:58:58.008282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.987 [2024-10-11 22:58:58.008346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.987 qpair failed and we were unable to recover it.
00:35:54.987 [2024-10-11 22:58:58.008638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.987 [2024-10-11 22:58:58.008702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.987 qpair failed and we were unable to recover it.
00:35:54.987 [2024-10-11 22:58:58.008919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.987 [2024-10-11 22:58:58.008982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.987 qpair failed and we were unable to recover it.
00:35:54.987 [2024-10-11 22:58:58.009160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.987 [2024-10-11 22:58:58.009226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.987 qpair failed and we were unable to recover it.
00:35:54.987 [2024-10-11 22:58:58.009428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.009491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.009768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.009832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.010124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.010186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.010384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.010446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.010638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.010702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.010961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.011027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.011329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.011393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.011651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.011718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.011959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.012022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.012268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.012331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.012521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.012600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.012816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.012882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.013145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.013211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.013397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.013462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.013714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.013778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.014068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.014132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.014429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.014493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.014745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.014809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.015051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.015114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.015428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.015493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.015769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.015834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.016082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.016148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.016436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.016499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.016818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.016882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.017128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.017191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.017480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.017543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.017784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.017847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.018103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.018166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.018422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.018485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.018715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.018778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.019028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.019091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.019297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.019364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.019653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.019728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.020030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.020094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.020388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.020452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.988 [2024-10-11 22:58:58.020669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.988 [2024-10-11 22:58:58.020734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.988 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.020969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.021032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.021251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.021314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.021515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.021592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.021841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.021905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.022148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.022212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.022473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.022536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.022838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.022904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.023145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.023211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.023501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.023601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.023898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.023962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.024224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.024288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.024586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.024651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.024910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.024974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.025184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.025248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.025494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.025580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.025891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.025955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.026251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.026314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.026583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.026648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.026898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.026964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.027243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.027305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.027608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.027673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.027897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.027961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.028258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.028324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.028585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.028651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.028857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.028919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.029178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.029241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.029488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.029565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.029813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.029878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.030129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.030193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.030398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.030465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.030732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.030797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.031013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.031076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.031329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.031394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.031635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.031700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.031994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.032057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.032274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.032338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.032602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.032678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.032966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.033030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.033230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.033299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.033518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.033597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.989 qpair failed and we were unable to recover it.
00:35:54.989 [2024-10-11 22:58:58.033851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.989 [2024-10-11 22:58:58.033917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.034217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.034283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.034547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.034625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.034926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.034990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.035286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.035349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.035589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.035656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.035904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.035970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.036208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.036271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.036497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.036579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.036881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.036946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.037242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.037305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.037586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.037651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.037946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.038009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.038298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.038361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.038620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.038685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.038948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.039011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.039257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.039319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.039584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.039649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.039906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.039970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.040266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.040328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.040623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.040688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.040904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.040970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.041214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.041281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.041586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.041653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.041859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.041922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.042169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.042233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.042493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.042571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.042861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.042924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.043134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.043197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.043451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.043514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.043777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.043841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.044083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.044146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.044356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.990 [2024-10-11 22:58:58.044420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:54.990 qpair failed and we were unable to recover it.
00:35:54.990 [2024-10-11 22:58:58.044714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.990 [2024-10-11 22:58:58.044781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.990 qpair failed and we were unable to recover it. 00:35:54.990 [2024-10-11 22:58:58.045023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.990 [2024-10-11 22:58:58.045086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.990 qpair failed and we were unable to recover it. 00:35:54.990 [2024-10-11 22:58:58.045369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.990 [2024-10-11 22:58:58.045432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.990 qpair failed and we were unable to recover it. 00:35:54.990 [2024-10-11 22:58:58.045679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.990 [2024-10-11 22:58:58.045754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.990 qpair failed and we were unable to recover it. 00:35:54.990 [2024-10-11 22:58:58.046061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.990 [2024-10-11 22:58:58.046124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.990 qpair failed and we were unable to recover it. 
00:35:54.990 [2024-10-11 22:58:58.046372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.990 [2024-10-11 22:58:58.046435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.990 qpair failed and we were unable to recover it. 00:35:54.990 [2024-10-11 22:58:58.046651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.990 [2024-10-11 22:58:58.046716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.990 qpair failed and we were unable to recover it. 00:35:54.990 [2024-10-11 22:58:58.047005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.990 [2024-10-11 22:58:58.047069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.990 qpair failed and we were unable to recover it. 00:35:54.990 [2024-10-11 22:58:58.047261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.047324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.047547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.047665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 
00:35:54.991 [2024-10-11 22:58:58.047957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.048020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.048308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.048371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.048657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.048723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.048988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.049051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.049329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.049393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 
00:35:54.991 [2024-10-11 22:58:58.049600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.049669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.049970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.050033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.050293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.050357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.050645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.050710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.050992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.051055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 
00:35:54.991 [2024-10-11 22:58:58.051354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.051417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.051631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.051699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.051950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.052014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.052258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.052321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.052529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.052607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 
00:35:54.991 [2024-10-11 22:58:58.052819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.052883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.053176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.053239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.053491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.053575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.053829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.053893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.054144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.054208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 
00:35:54.991 [2024-10-11 22:58:58.054468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.054532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.054801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.054868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.055096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.055161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.055454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.055517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.055793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.055860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 
00:35:54.991 [2024-10-11 22:58:58.056147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.056210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.056491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.056575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.056814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.056867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.057082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.057132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.057330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.057394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 
00:35:54.991 [2024-10-11 22:58:58.057627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.057680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.057926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.057991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.058192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.058259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.058521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.058613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.058897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.058960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 
00:35:54.991 [2024-10-11 22:58:58.059251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.059315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.059533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.059628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.059847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.059912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.060104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.991 [2024-10-11 22:58:58.060167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.991 qpair failed and we were unable to recover it. 00:35:54.991 [2024-10-11 22:58:58.060391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.060457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 
00:35:54.992 [2024-10-11 22:58:58.060714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.060778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 00:35:54.992 [2024-10-11 22:58:58.061021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.061084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 00:35:54.992 [2024-10-11 22:58:58.061317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.061381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 00:35:54.992 [2024-10-11 22:58:58.061602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.061669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 00:35:54.992 [2024-10-11 22:58:58.061933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.061997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 
00:35:54.992 [2024-10-11 22:58:58.062292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.062355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 00:35:54.992 [2024-10-11 22:58:58.062611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.062679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 00:35:54.992 [2024-10-11 22:58:58.062942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.063006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 00:35:54.992 [2024-10-11 22:58:58.063261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.063324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 00:35:54.992 [2024-10-11 22:58:58.063579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.063645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 
00:35:54.992 [2024-10-11 22:58:58.063900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.063964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 00:35:54.992 [2024-10-11 22:58:58.064199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.064262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 00:35:54.992 [2024-10-11 22:58:58.064444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.064508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 00:35:54.992 [2024-10-11 22:58:58.064765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.064828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 00:35:54.992 [2024-10-11 22:58:58.065095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.065158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 
00:35:54.992 [2024-10-11 22:58:58.065411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.065475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 00:35:54.992 [2024-10-11 22:58:58.065682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.065746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 00:35:54.992 [2024-10-11 22:58:58.065996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.066061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 00:35:54.992 [2024-10-11 22:58:58.066267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.066331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 00:35:54.992 [2024-10-11 22:58:58.066622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.066689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 
00:35:54.992 [2024-10-11 22:58:58.067000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.067064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 00:35:54.992 [2024-10-11 22:58:58.067312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.067375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 00:35:54.992 [2024-10-11 22:58:58.067666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.067731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 00:35:54.992 [2024-10-11 22:58:58.067936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.068000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 00:35:54.992 [2024-10-11 22:58:58.068283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.068347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 
00:35:54.992 [2024-10-11 22:58:58.068532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.068614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 00:35:54.992 [2024-10-11 22:58:58.068906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.068970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 00:35:54.992 [2024-10-11 22:58:58.069172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.069238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 00:35:54.992 [2024-10-11 22:58:58.069461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.069525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 00:35:54.992 [2024-10-11 22:58:58.069800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-10-11 22:58:58.069864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 
00:35:54.995 [2024-10-11 22:58:58.104636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-10-11 22:58:58.104702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-10-11 22:58:58.104899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-10-11 22:58:58.104963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-10-11 22:58:58.105178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-10-11 22:58:58.105245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-10-11 22:58:58.105533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-10-11 22:58:58.105610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-10-11 22:58:58.105857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-10-11 22:58:58.105921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 
00:35:54.995 [2024-10-11 22:58:58.106123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-10-11 22:58:58.106186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-10-11 22:58:58.106460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-10-11 22:58:58.106522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-10-11 22:58:58.106782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-10-11 22:58:58.106846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-10-11 22:58:58.107134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-10-11 22:58:58.107198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-10-11 22:58:58.107419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-10-11 22:58:58.107481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 
00:35:54.995 [2024-10-11 22:58:58.107753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-10-11 22:58:58.107817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-10-11 22:58:58.108064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-10-11 22:58:58.108127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-10-11 22:58:58.108359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-10-11 22:58:58.108421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-10-11 22:58:58.108709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-10-11 22:58:58.108784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-10-11 22:58:58.109071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-10-11 22:58:58.109135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 
00:35:54.995 [2024-10-11 22:58:58.109419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-10-11 22:58:58.109486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-10-11 22:58:58.109756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.109821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-10-11 22:58:58.110032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.110095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-10-11 22:58:58.110336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.110398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-10-11 22:58:58.110664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.110729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 
00:35:54.996 [2024-10-11 22:58:58.111025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.111087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-10-11 22:58:58.111329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.111395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-10-11 22:58:58.111677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.111743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-10-11 22:58:58.112007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.112070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-10-11 22:58:58.112325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.112391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 
00:35:54.996 [2024-10-11 22:58:58.112697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.112764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-10-11 22:58:58.113010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.113075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-10-11 22:58:58.113328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.113392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-10-11 22:58:58.113636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.113702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-10-11 22:58:58.113939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.114002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 
00:35:54.996 [2024-10-11 22:58:58.114233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.114296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-10-11 22:58:58.114593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.114658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-10-11 22:58:58.114902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.114966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-10-11 22:58:58.115261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.115325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-10-11 22:58:58.115630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.115696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 
00:35:54.996 [2024-10-11 22:58:58.115909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.115972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-10-11 22:58:58.116256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.116318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-10-11 22:58:58.116523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.116607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-10-11 22:58:58.116806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.116870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-10-11 22:58:58.117122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.117187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 
00:35:54.996 [2024-10-11 22:58:58.117452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.117516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-10-11 22:58:58.117762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.117827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-10-11 22:58:58.118117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.118180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-10-11 22:58:58.118402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.118466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-10-11 22:58:58.118702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.118767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 
00:35:54.996 [2024-10-11 22:58:58.119015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.119081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-10-11 22:58:58.119373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.119436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-10-11 22:58:58.119708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.119773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-10-11 22:58:58.120058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.120123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-10-11 22:58:58.120352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.120416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 
00:35:54.996 [2024-10-11 22:58:58.120652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.120717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-10-11 22:58:58.120966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.121030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-10-11 22:58:58.121335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-10-11 22:58:58.121400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.121658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.121733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.121934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.121997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 
00:35:54.997 [2024-10-11 22:58:58.122236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.122299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.122511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.122588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.122794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.122858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.123108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.123171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.123433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.123494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 
00:35:54.997 [2024-10-11 22:58:58.123781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.123846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.124062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.124128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.124380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.124443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.124708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.124775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.124992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.125059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 
00:35:54.997 [2024-10-11 22:58:58.125307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.125372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.125590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.125658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.125926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.125991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.126207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.126269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.126514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.126600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 
00:35:54.997 [2024-10-11 22:58:58.126830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.126894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.127145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.127209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.127492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.127569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.127790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.127853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.128148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.128211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 
00:35:54.997 [2024-10-11 22:58:58.128448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.128512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.128823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.128886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.129170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.129232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.129475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.129540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.129769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.129832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 
00:35:54.997 [2024-10-11 22:58:58.130099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.130163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.130417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.130480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.130742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.130808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.131055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.131119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.131367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.131434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 
00:35:54.997 [2024-10-11 22:58:58.131722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.131789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.132078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.132143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.132369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.132433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.132678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.132743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.132999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.133063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 
00:35:54.997 [2024-10-11 22:58:58.133267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.133330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.133541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.133619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.133839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.997 [2024-10-11 22:58:58.133904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.997 qpair failed and we were unable to recover it. 00:35:54.997 [2024-10-11 22:58:58.134148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.134212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.134462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.134527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 
00:35:54.998 [2024-10-11 22:58:58.134790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.134854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.135113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.135176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.135350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.135415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.135706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.135771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.135975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.136039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 
00:35:54.998 [2024-10-11 22:58:58.136318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.136381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.136669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.136734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.137010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.137074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.137281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.137344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.137599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.137664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 
00:35:54.998 [2024-10-11 22:58:58.137953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.138016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.138268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.138331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.138566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.138633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.138932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.138996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.139239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.139302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 
00:35:54.998 [2024-10-11 22:58:58.139581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.139646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.139896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.139959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.140172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.140235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.140476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.140539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.140813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.140875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 
00:35:54.998 [2024-10-11 22:58:58.141131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.141195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.141440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.141507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.141822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.141885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.142091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.142156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.142405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.142469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 
00:35:54.998 [2024-10-11 22:58:58.142777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.142852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.143125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.143189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.143484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.143547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.143781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.143843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.144086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.144149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 
00:35:54.998 [2024-10-11 22:58:58.144442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.144504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.144813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.144876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.145121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.145184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.145456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.145518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.145827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.145893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 
00:35:54.998 [2024-10-11 22:58:58.146145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.146211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.146475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.146538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.146819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.146883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.998 [2024-10-11 22:58:58.147126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.998 [2024-10-11 22:58:58.147189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.998 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.147452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.147515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 
00:35:54.999 [2024-10-11 22:58:58.147806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.147870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.148088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.148153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.148421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.148483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.148745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.148812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.149015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.149081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 
00:35:54.999 [2024-10-11 22:58:58.149319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.149382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.149594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.149660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.149904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.149968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.150187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.150251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.150461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.150525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 
00:35:54.999 [2024-10-11 22:58:58.150736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.150800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.151055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.151118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.151441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.151505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.151791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.151856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.152078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.152144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 
00:35:54.999 [2024-10-11 22:58:58.152435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.152499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.152768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.152835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.153120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.153184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.153372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.153436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.153622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.153686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 
00:35:54.999 [2024-10-11 22:58:58.153922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.153985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.154211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.154274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.154494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.154575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.154824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.154888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.155119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.155183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 
00:35:54.999 [2024-10-11 22:58:58.155427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.155500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.155800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.155864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.156100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.156162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.156448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.156512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.156820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.156884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 
00:35:54.999 [2024-10-11 22:58:58.157125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.157187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.157438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.157501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.157729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.157795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.158054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.158117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.158369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.158432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 
00:35:54.999 [2024-10-11 22:58:58.158687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.158752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.159014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.159077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.159328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.159392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.159640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.159704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:54.999 qpair failed and we were unable to recover it. 00:35:54.999 [2024-10-11 22:58:58.160003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.999 [2024-10-11 22:58:58.160066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.000 qpair failed and we were unable to recover it. 
00:35:55.000 [2024-10-11 22:58:58.160290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.000 [2024-10-11 22:58:58.160353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.000 qpair failed and we were unable to recover it. 00:35:55.000 [2024-10-11 22:58:58.160576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.000 [2024-10-11 22:58:58.160640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.000 qpair failed and we were unable to recover it. 00:35:55.000 [2024-10-11 22:58:58.160889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.000 [2024-10-11 22:58:58.160952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.000 qpair failed and we were unable to recover it. 00:35:55.000 [2024-10-11 22:58:58.161189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.000 [2024-10-11 22:58:58.161252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.000 qpair failed and we were unable to recover it. 00:35:55.000 [2024-10-11 22:58:58.161494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.000 [2024-10-11 22:58:58.161571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.000 qpair failed and we were unable to recover it. 
00:35:55.000 [2024-10-11 22:58:58.161825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.161890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.162099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.162166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.162364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.162428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.162699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.162763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.163012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.163076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.163334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.163396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.163651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.163716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.163986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.164051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.164302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.164366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.164603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.164668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.164956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.165020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.165321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.165383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.165606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.165673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.165966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.166030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.166322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.166386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.166651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.166718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.166919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.166983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.167186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.167250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.167463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.167527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.167785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.167848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.168094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.168170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.168435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.168500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.168776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.168839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.169093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.169156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.169444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.169508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.169783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.169846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.170064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.170129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.170382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.170446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.170756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.170821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.171108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.171171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.000 [2024-10-11 22:58:58.171392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.000 [2024-10-11 22:58:58.171455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.000 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.171768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.171836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.172079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.172144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.172429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.172492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.172822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.172886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.173154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.173218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.173436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.173499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.173790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.173855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.174146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.174208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.174455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.174518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.174812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.174876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.175128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.175191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.175483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.175546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.175848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.175912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.176197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.176260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.176445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.176508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.176762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.176836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.177062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.177124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.177412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.177475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.177788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.177853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.178109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.178172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.178407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.178471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.178744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.178810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.179070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.179133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.179392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.179455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.179772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.179836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.180125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.180188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.180444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.180507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.180767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.180832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.181050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.181116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.181360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.181435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.181718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.181783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.181981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.182045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.182260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.182324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.182587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.182661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.182964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.183027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.183217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.183285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.183604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.183670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.183958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.184022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.184229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.184292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.184578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.001 [2024-10-11 22:58:58.184645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.001 qpair failed and we were unable to recover it.
00:35:55.001 [2024-10-11 22:58:58.184908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.002 [2024-10-11 22:58:58.184975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.002 qpair failed and we were unable to recover it.
00:35:55.002 [2024-10-11 22:58:58.185236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.002 [2024-10-11 22:58:58.185300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.002 qpair failed and we were unable to recover it.
00:35:55.002 [2024-10-11 22:58:58.185605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.002 [2024-10-11 22:58:58.185670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.002 qpair failed and we were unable to recover it.
00:35:55.002 [2024-10-11 22:58:58.185925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.002 [2024-10-11 22:58:58.185992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.002 qpair failed and we were unable to recover it.
00:35:55.002 [2024-10-11 22:58:58.186199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.002 [2024-10-11 22:58:58.186264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.002 qpair failed and we were unable to recover it.
00:35:55.002 [2024-10-11 22:58:58.186511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.002 [2024-10-11 22:58:58.186595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.002 qpair failed and we were unable to recover it.
00:35:55.002 [2024-10-11 22:58:58.186857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.002 [2024-10-11 22:58:58.186920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.002 qpair failed and we were unable to recover it.
00:35:55.002 [2024-10-11 22:58:58.187211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.002 [2024-10-11 22:58:58.187275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.002 qpair failed and we were unable to recover it.
00:35:55.002 [2024-10-11 22:58:58.187583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.002 [2024-10-11 22:58:58.187648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.002 qpair failed and we were unable to recover it.
00:35:55.002 [2024-10-11 22:58:58.187847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.002 [2024-10-11 22:58:58.187912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.002 qpair failed and we were unable to recover it.
00:35:55.002 [2024-10-11 22:58:58.188167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.002 [2024-10-11 22:58:58.188232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.002 qpair failed and we were unable to recover it.
00:35:55.002 [2024-10-11 22:58:58.188513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.002 [2024-10-11 22:58:58.188603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.002 qpair failed and we were unable to recover it.
00:35:55.002 [2024-10-11 22:58:58.188913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.002 [2024-10-11 22:58:58.188976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.002 qpair failed and we were unable to recover it.
00:35:55.002 [2024-10-11 22:58:58.189236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.002 [2024-10-11 22:58:58.189299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.002 qpair failed and we were unable to recover it.
00:35:55.002 [2024-10-11 22:58:58.189566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.002 [2024-10-11 22:58:58.189631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.002 qpair failed and we were unable to recover it.
00:35:55.002 [2024-10-11 22:58:58.189872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.002 [2024-10-11 22:58:58.189935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.002 qpair failed and we were unable to recover it.
00:35:55.002 [2024-10-11 22:58:58.190237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.002 [2024-10-11 22:58:58.190300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.002 qpair failed and we were unable to recover it.
00:35:55.002 [2024-10-11 22:58:58.190591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.002 [2024-10-11 22:58:58.190656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.002 qpair failed and we were unable to recover it.
00:35:55.002 [2024-10-11 22:58:58.190880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.002 [2024-10-11 22:58:58.190943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.002 qpair failed and we were unable to recover it.
00:35:55.002 [2024-10-11 22:58:58.191226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.002 [2024-10-11 22:58:58.191288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.002 qpair failed and we were unable to recover it.
00:35:55.002 [2024-10-11 22:58:58.191605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.002 [2024-10-11 22:58:58.191671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.002 qpair failed and we were unable to recover it.
00:35:55.002 [2024-10-11 22:58:58.191941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.002 [2024-10-11 22:58:58.192004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.002 qpair failed and we were unable to recover it.
00:35:55.002 [2024-10-11 22:58:58.192293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.002 [2024-10-11 22:58:58.192356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.002 qpair failed and we were unable to recover it.
00:35:55.002 [2024-10-11 22:58:58.192571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.002 [2024-10-11 22:58:58.192635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.002 qpair failed and we were unable to recover it.
00:35:55.002 [2024-10-11 22:58:58.192848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.002 [2024-10-11 22:58:58.192914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.002 qpair failed and we were unable to recover it. 00:35:55.002 [2024-10-11 22:58:58.193152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.002 [2024-10-11 22:58:58.193215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.002 qpair failed and we were unable to recover it. 00:35:55.002 [2024-10-11 22:58:58.193458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.002 [2024-10-11 22:58:58.193522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.002 qpair failed and we were unable to recover it. 00:35:55.002 [2024-10-11 22:58:58.193786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.002 [2024-10-11 22:58:58.193849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.002 qpair failed and we were unable to recover it. 00:35:55.002 [2024-10-11 22:58:58.194083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.002 [2024-10-11 22:58:58.194146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.002 qpair failed and we were unable to recover it. 
00:35:55.002 [2024-10-11 22:58:58.194390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.002 [2024-10-11 22:58:58.194465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.002 qpair failed and we were unable to recover it. 00:35:55.002 [2024-10-11 22:58:58.194693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.002 [2024-10-11 22:58:58.194758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.002 qpair failed and we were unable to recover it. 00:35:55.002 [2024-10-11 22:58:58.194997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.002 [2024-10-11 22:58:58.195060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.002 qpair failed and we were unable to recover it. 00:35:55.002 [2024-10-11 22:58:58.195350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.002 [2024-10-11 22:58:58.195414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.002 qpair failed and we were unable to recover it. 00:35:55.002 [2024-10-11 22:58:58.195665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.002 [2024-10-11 22:58:58.195730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.002 qpair failed and we were unable to recover it. 
00:35:55.002 [2024-10-11 22:58:58.195999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.002 [2024-10-11 22:58:58.196062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.002 qpair failed and we were unable to recover it. 00:35:55.002 [2024-10-11 22:58:58.196325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.002 [2024-10-11 22:58:58.196388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.002 qpair failed and we were unable to recover it. 00:35:55.002 [2024-10-11 22:58:58.196651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.002 [2024-10-11 22:58:58.196716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.002 qpair failed and we were unable to recover it. 00:35:55.002 [2024-10-11 22:58:58.196964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.002 [2024-10-11 22:58:58.197030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.002 qpair failed and we were unable to recover it. 00:35:55.002 [2024-10-11 22:58:58.197213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.002 [2024-10-11 22:58:58.197279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.002 qpair failed and we were unable to recover it. 
00:35:55.002 [2024-10-11 22:58:58.197580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.002 [2024-10-11 22:58:58.197645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.002 qpair failed and we were unable to recover it. 00:35:55.002 [2024-10-11 22:58:58.197940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.002 [2024-10-11 22:58:58.198003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.002 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.198192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.198254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.198496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.198576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.198805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.198877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 
00:35:55.003 [2024-10-11 22:58:58.199124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.199189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.199392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.199459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.199738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.199806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.200008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.200072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.200339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.200402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 
00:35:55.003 [2024-10-11 22:58:58.200587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.200659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.200957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.201022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.201214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.201276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.201530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.201618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.201841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.201903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 
00:35:55.003 [2024-10-11 22:58:58.202109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.202171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.202458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.202522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.202788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.202851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.203099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.203163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.203392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.203454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 
00:35:55.003 [2024-10-11 22:58:58.203716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.203780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.204050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.204115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.204380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.204443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.204711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.204778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.205019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.205084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 
00:35:55.003 [2024-10-11 22:58:58.205331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.205395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.205647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.205712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.205968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.206031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.206323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.206386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.206601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.206665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 
00:35:55.003 [2024-10-11 22:58:58.206955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.207029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.207325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.207389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.207601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.207666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.207910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.207976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.208301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.208364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 
00:35:55.003 [2024-10-11 22:58:58.208617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.208681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.208925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.208988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.209285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.209349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.209596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.209661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.209914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.209980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 
00:35:55.003 [2024-10-11 22:58:58.210267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.210331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.210593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.210659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-10-11 22:58:58.210914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-10-11 22:58:58.210977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-10-11 22:58:58.211163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-10-11 22:58:58.211225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-10-11 22:58:58.211487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-10-11 22:58:58.211568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 
00:35:55.004 [2024-10-11 22:58:58.211846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-10-11 22:58:58.211909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-10-11 22:58:58.212119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-10-11 22:58:58.212185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-10-11 22:58:58.212388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-10-11 22:58:58.212452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-10-11 22:58:58.212771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-10-11 22:58:58.212836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-10-11 22:58:58.213076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-10-11 22:58:58.213138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 
00:35:55.004 [2024-10-11 22:58:58.213363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-10-11 22:58:58.213425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-10-11 22:58:58.213671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-10-11 22:58:58.213736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-10-11 22:58:58.213979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-10-11 22:58:58.214042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-10-11 22:58:58.214276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-10-11 22:58:58.214340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-10-11 22:58:58.214620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-10-11 22:58:58.214685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 
00:35:55.004 [2024-10-11 22:58:58.214912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-10-11 22:58:58.214974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-10-11 22:58:58.215262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-10-11 22:58:58.215325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-10-11 22:58:58.215573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-10-11 22:58:58.215639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-10-11 22:58:58.215927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-10-11 22:58:58.215990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-10-11 22:58:58.216225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-10-11 22:58:58.216288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 
00:35:55.004 [2024-10-11 22:58:58.216488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-10-11 22:58:58.216566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-10-11 22:58:58.216823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-10-11 22:58:58.216886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-10-11 22:58:58.217130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-10-11 22:58:58.217193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-10-11 22:58:58.217438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-10-11 22:58:58.217502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-10-11 22:58:58.217778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-10-11 22:58:58.217841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 
00:35:55.004 [2024-10-11 22:58:58.218090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-10-11 22:58:58.218157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-10-11 22:58:58.218407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-10-11 22:58:58.218470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-10-11 22:58:58.218704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-10-11 22:58:58.218772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-10-11 22:58:58.218986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-10-11 22:58:58.219053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-10-11 22:58:58.219297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-10-11 22:58:58.219362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 
00:35:55.004 [2024-10-11 22:58:58.219656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.004 [2024-10-11 22:58:58.219733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.004 qpair failed and we were unable to recover it.
[The same three-line failure repeats without variation from 22:58:58.219 through 22:58:58.255 (job time 00:35:55.004 to 00:35:55.284): connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7ff3d8000b90 at addr=10.0.0.2 port=4420, and each attempt ends with "qpair failed and we were unable to recover it."]
00:35:55.284 [2024-10-11 22:58:58.256176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-11 22:58:58.256238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-11 22:58:58.256488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-11 22:58:58.256571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-11 22:58:58.256826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-11 22:58:58.256892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-11 22:58:58.257175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-11 22:58:58.257238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-11 22:58:58.257500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-11 22:58:58.257581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 
00:35:55.284 [2024-10-11 22:58:58.257795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-11 22:58:58.257868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-11 22:58:58.258135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-11 22:58:58.258198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-11 22:58:58.258405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-11 22:58:58.258468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-11 22:58:58.258749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-11 22:58:58.258813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-11 22:58:58.259111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-11 22:58:58.259175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 
00:35:55.284 [2024-10-11 22:58:58.259466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-11 22:58:58.259531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-11 22:58:58.259760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-11 22:58:58.259826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-11 22:58:58.260115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-11 22:58:58.260179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-11 22:58:58.260465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-11 22:58:58.260529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-11 22:58:58.260807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-11 22:58:58.260870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 
00:35:55.284 [2024-10-11 22:58:58.261164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-11 22:58:58.261228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-11 22:58:58.261490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-11 22:58:58.261572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-11 22:58:58.261823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-11 22:58:58.261885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-11 22:58:58.262172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-11 22:58:58.262235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-11 22:58:58.262494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-11 22:58:58.262592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 
00:35:55.284 [2024-10-11 22:58:58.262814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-11 22:58:58.262878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-11 22:58:58.263117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-11 22:58:58.263179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-11 22:58:58.263475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-11 22:58:58.263537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-11 22:58:58.263809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-11 22:58:58.263872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-11 22:58:58.264143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-11 22:58:58.264206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 
00:35:55.285 [2024-10-11 22:58:58.264498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-11 22:58:58.264577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-11 22:58:58.264872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-11 22:58:58.264935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-11 22:58:58.265183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-11 22:58:58.265247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-11 22:58:58.265510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-11 22:58:58.265591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-11 22:58:58.265849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-11 22:58:58.265913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 
00:35:55.285 [2024-10-11 22:58:58.266128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-11 22:58:58.266196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-11 22:58:58.266448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-11 22:58:58.266510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-11 22:58:58.266814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-11 22:58:58.266878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-11 22:58:58.267080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-11 22:58:58.267142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-11 22:58:58.267439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-11 22:58:58.267503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 
00:35:55.285 [2024-10-11 22:58:58.267816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-11 22:58:58.267879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-11 22:58:58.268126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-11 22:58:58.268190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-11 22:58:58.268437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-11 22:58:58.268500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-11 22:58:58.268755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-11 22:58:58.268819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-11 22:58:58.269103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-11 22:58:58.269166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 
00:35:55.285 [2024-10-11 22:58:58.269410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-11 22:58:58.269477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-11 22:58:58.269749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-11 22:58:58.269814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-11 22:58:58.270076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-11 22:58:58.270139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-11 22:58:58.270325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-11 22:58:58.270388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-11 22:58:58.270619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-11 22:58:58.270684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 
00:35:55.285 [2024-10-11 22:58:58.270925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-11 22:58:58.270999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-11 22:58:58.271252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-11 22:58:58.271315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-11 22:58:58.271524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-11 22:58:58.271601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-11 22:58:58.271898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-11 22:58:58.271963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-11 22:58:58.272207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-11 22:58:58.272270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 
00:35:55.285 [2024-10-11 22:58:58.272516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-11 22:58:58.272593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-11 22:58:58.272864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-11 22:58:58.272928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-11 22:58:58.273225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-11 22:58:58.273287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-11 22:58:58.273533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-11 22:58:58.273608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-11 22:58:58.273900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-11 22:58:58.273965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 
00:35:55.286 [2024-10-11 22:58:58.274206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-11 22:58:58.274268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-11 22:58:58.274605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-11 22:58:58.274670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-11 22:58:58.274901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-11 22:58:58.274968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-11 22:58:58.275178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-11 22:58:58.275242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-11 22:58:58.275510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-11 22:58:58.275591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 
00:35:55.286 [2024-10-11 22:58:58.275855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-11 22:58:58.275919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-11 22:58:58.276212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-11 22:58:58.276275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-11 22:58:58.276516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-11 22:58:58.276598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-11 22:58:58.276877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-11 22:58:58.276941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-11 22:58:58.277220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-11 22:58:58.277283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 
00:35:55.286 [2024-10-11 22:58:58.277527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-11 22:58:58.277610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-11 22:58:58.277826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-11 22:58:58.277890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-11 22:58:58.278150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-11 22:58:58.278214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-11 22:58:58.278425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-11 22:58:58.278488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-11 22:58:58.278769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-11 22:58:58.278834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 
00:35:55.286 [2024-10-11 22:58:58.279090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-11 22:58:58.279154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-11 22:58:58.279415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-11 22:58:58.279477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-11 22:58:58.279740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-11 22:58:58.279805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-11 22:58:58.280056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-11 22:58:58.280119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-11 22:58:58.280362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-11 22:58:58.280424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 
00:35:55.286 [2024-10-11 22:58:58.280733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.286 [2024-10-11 22:58:58.280798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.286 qpair failed and we were unable to recover it.
00:35:55.286 [2024-10-11 22:58:58.281049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.286 [2024-10-11 22:58:58.281113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.286 qpair failed and we were unable to recover it.
00:35:55.286 [2024-10-11 22:58:58.281364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.286 [2024-10-11 22:58:58.281430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.286 qpair failed and we were unable to recover it.
00:35:55.286 [2024-10-11 22:58:58.281652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.286 [2024-10-11 22:58:58.281719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.286 qpair failed and we were unable to recover it.
00:35:55.286 [2024-10-11 22:58:58.281934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.286 [2024-10-11 22:58:58.282000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.286 qpair failed and we were unable to recover it.
00:35:55.286 [2024-10-11 22:58:58.282287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.286 [2024-10-11 22:58:58.282349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.286 qpair failed and we were unable to recover it.
00:35:55.287 [2024-10-11 22:58:58.282598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.287 [2024-10-11 22:58:58.282662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.287 qpair failed and we were unable to recover it.
00:35:55.287 [2024-10-11 22:58:58.282890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.287 [2024-10-11 22:58:58.282953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.287 qpair failed and we were unable to recover it.
00:35:55.287 [2024-10-11 22:58:58.283188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.287 [2024-10-11 22:58:58.283251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.287 qpair failed and we were unable to recover it.
00:35:55.287 [2024-10-11 22:58:58.283519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.287 [2024-10-11 22:58:58.283599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.287 qpair failed and we were unable to recover it.
00:35:55.287 [2024-10-11 22:58:58.283857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.287 [2024-10-11 22:58:58.283931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.287 qpair failed and we were unable to recover it.
00:35:55.287 [2024-10-11 22:58:58.284221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.287 [2024-10-11 22:58:58.284284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.287 qpair failed and we were unable to recover it.
00:35:55.287 [2024-10-11 22:58:58.284531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.287 [2024-10-11 22:58:58.284616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.287 qpair failed and we were unable to recover it.
00:35:55.287 [2024-10-11 22:58:58.284927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.287 [2024-10-11 22:58:58.284990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.287 qpair failed and we were unable to recover it.
00:35:55.287 [2024-10-11 22:58:58.285239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.287 [2024-10-11 22:58:58.285304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.287 qpair failed and we were unable to recover it.
00:35:55.287 [2024-10-11 22:58:58.285614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.287 [2024-10-11 22:58:58.285680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.287 qpair failed and we were unable to recover it.
00:35:55.287 [2024-10-11 22:58:58.285884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.287 [2024-10-11 22:58:58.285947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.287 qpair failed and we were unable to recover it.
00:35:55.287 [2024-10-11 22:58:58.286229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.287 [2024-10-11 22:58:58.286292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.287 qpair failed and we were unable to recover it.
00:35:55.287 [2024-10-11 22:58:58.286536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.287 [2024-10-11 22:58:58.286619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.287 qpair failed and we were unable to recover it.
00:35:55.287 [2024-10-11 22:58:58.286835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.287 [2024-10-11 22:58:58.286899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.287 qpair failed and we were unable to recover it.
00:35:55.287 [2024-10-11 22:58:58.287186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.287 [2024-10-11 22:58:58.287249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.287 qpair failed and we were unable to recover it.
00:35:55.287 [2024-10-11 22:58:58.287600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.287 [2024-10-11 22:58:58.287665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.287 qpair failed and we were unable to recover it.
00:35:55.287 [2024-10-11 22:58:58.287973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.287 [2024-10-11 22:58:58.288036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.287 qpair failed and we were unable to recover it.
00:35:55.287 [2024-10-11 22:58:58.288334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.287 [2024-10-11 22:58:58.288397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.287 qpair failed and we were unable to recover it.
00:35:55.287 [2024-10-11 22:58:58.288701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.287 [2024-10-11 22:58:58.288768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.287 qpair failed and we were unable to recover it.
00:35:55.287 [2024-10-11 22:58:58.289021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.287 [2024-10-11 22:58:58.289087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.287 qpair failed and we were unable to recover it.
00:35:55.287 [2024-10-11 22:58:58.289283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.287 [2024-10-11 22:58:58.289347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.287 qpair failed and we were unable to recover it.
00:35:55.287 [2024-10-11 22:58:58.289566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.287 [2024-10-11 22:58:58.289631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.287 qpair failed and we were unable to recover it.
00:35:55.287 [2024-10-11 22:58:58.289923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.287 [2024-10-11 22:58:58.289987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.287 qpair failed and we were unable to recover it.
00:35:55.288 [2024-10-11 22:58:58.290225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-10-11 22:58:58.290289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.288 [2024-10-11 22:58:58.290586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-10-11 22:58:58.290651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.288 [2024-10-11 22:58:58.290947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-10-11 22:58:58.291010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.288 [2024-10-11 22:58:58.291300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-10-11 22:58:58.291363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.288 [2024-10-11 22:58:58.291666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-10-11 22:58:58.291730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.288 [2024-10-11 22:58:58.291982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-10-11 22:58:58.292045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.288 [2024-10-11 22:58:58.292313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-10-11 22:58:58.292376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.288 [2024-10-11 22:58:58.292620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-10-11 22:58:58.292686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.288 [2024-10-11 22:58:58.292945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-10-11 22:58:58.293011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.288 [2024-10-11 22:58:58.293218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-10-11 22:58:58.293281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.288 [2024-10-11 22:58:58.293532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-10-11 22:58:58.293609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.288 [2024-10-11 22:58:58.293853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-10-11 22:58:58.293917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.288 [2024-10-11 22:58:58.294177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-10-11 22:58:58.294239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.288 [2024-10-11 22:58:58.294485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-10-11 22:58:58.294548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.288 [2024-10-11 22:58:58.294840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-10-11 22:58:58.294903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.288 [2024-10-11 22:58:58.295198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-10-11 22:58:58.295260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.288 [2024-10-11 22:58:58.295507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-10-11 22:58:58.295591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.288 [2024-10-11 22:58:58.295856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-10-11 22:58:58.295919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.288 [2024-10-11 22:58:58.296206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-10-11 22:58:58.296268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.288 [2024-10-11 22:58:58.296522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-10-11 22:58:58.296616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.288 [2024-10-11 22:58:58.296873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-10-11 22:58:58.296935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.288 [2024-10-11 22:58:58.297214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-10-11 22:58:58.297277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.288 [2024-10-11 22:58:58.297502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-10-11 22:58:58.297586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.288 [2024-10-11 22:58:58.297854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-10-11 22:58:58.297917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.288 [2024-10-11 22:58:58.298175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-10-11 22:58:58.298241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.288 [2024-10-11 22:58:58.298490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-10-11 22:58:58.298574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.288 [2024-10-11 22:58:58.298873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-10-11 22:58:58.298937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.288 [2024-10-11 22:58:58.299164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-10-11 22:58:58.299226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.288 [2024-10-11 22:58:58.299468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-10-11 22:58:58.299530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.299795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-10-11 22:58:58.299858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.300141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-10-11 22:58:58.300204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.300502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-10-11 22:58:58.300585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.300888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-10-11 22:58:58.300951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.301202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-10-11 22:58:58.301265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.301579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-10-11 22:58:58.301644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.301912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-10-11 22:58:58.301975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.302264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-10-11 22:58:58.302326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.302605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-10-11 22:58:58.302673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.302967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-10-11 22:58:58.303033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.303272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-10-11 22:58:58.303335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.303546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-10-11 22:58:58.303629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.303865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-10-11 22:58:58.303928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.304172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-10-11 22:58:58.304237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.304432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-10-11 22:58:58.304495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.304822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-10-11 22:58:58.304886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.305195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-10-11 22:58:58.305259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.305510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-10-11 22:58:58.305600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.305823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-10-11 22:58:58.305884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.306134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-10-11 22:58:58.306206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.306489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-10-11 22:58:58.306569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.306856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-10-11 22:58:58.306920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.307182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-10-11 22:58:58.307246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.307468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-10-11 22:58:58.307532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.307831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-10-11 22:58:58.307895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.308135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-10-11 22:58:58.308199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.308408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-10-11 22:58:58.308471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.308699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-10-11 22:58:58.308766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.309013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-10-11 22:58:58.309077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-10-11 22:58:58.309335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-10-11 22:58:58.309398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-10-11 22:58:58.309612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-10-11 22:58:58.309679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-10-11 22:58:58.309905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-10-11 22:58:58.309968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-10-11 22:58:58.310185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-10-11 22:58:58.310248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-10-11 22:58:58.310464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-10-11 22:58:58.310529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-10-11 22:58:58.310787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-10-11 22:58:58.310851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-10-11 22:58:58.311067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-10-11 22:58:58.311131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-10-11 22:58:58.311376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-10-11 22:58:58.311440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-10-11 22:58:58.311715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-10-11 22:58:58.311780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-10-11 22:58:58.312030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-10-11 22:58:58.312094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-10-11 22:58:58.312338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-10-11 22:58:58.312402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-10-11 22:58:58.312697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-10-11 22:58:58.312762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-10-11 22:58:58.313024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-10-11 22:58:58.313088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-10-11 22:58:58.313371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-10-11 22:58:58.313435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-10-11 22:58:58.313685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-10-11 22:58:58.313751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-10-11 22:58:58.314045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-10-11 22:58:58.314109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-10-11 22:58:58.314316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-10-11 22:58:58.314380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-10-11 22:58:58.314601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-10-11 22:58:58.314667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-10-11 22:58:58.314897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-10-11 22:58:58.314961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-10-11 22:58:58.315259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-10-11 22:58:58.315323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-10-11 22:58:58.315519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-10-11 22:58:58.315599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-10-11 22:58:58.315824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-10-11 22:58:58.315889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-10-11 22:58:58.316176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-10-11 22:58:58.316240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-10-11 22:58:58.316496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-10-11 22:58:58.316576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-10-11 22:58:58.316842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-10-11 22:58:58.316907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-10-11 22:58:58.317101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-10-11 22:58:58.317166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-10-11 22:58:58.317408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-10-11 22:58:58.317471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-10-11 22:58:58.317747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-10-11 22:58:58.317812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-10-11 22:58:58.318056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.290 [2024-10-11 22:58:58.318120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.290 qpair failed and we were unable to recover it. 00:35:55.290 [2024-10-11 22:58:58.318367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.290 [2024-10-11 22:58:58.318433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.290 qpair failed and we were unable to recover it. 00:35:55.291 [2024-10-11 22:58:58.318657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-10-11 22:58:58.318732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-10-11 22:58:58.318982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-10-11 22:58:58.319046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-10-11 22:58:58.319293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-10-11 22:58:58.319356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 
00:35:55.291 [2024-10-11 22:58:58.319584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-10-11 22:58:58.319650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-10-11 22:58:58.319861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-10-11 22:58:58.319925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-10-11 22:58:58.320189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-10-11 22:58:58.320253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-10-11 22:58:58.320435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-10-11 22:58:58.320498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-10-11 22:58:58.320753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-10-11 22:58:58.320817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 
00:35:55.291 [2024-10-11 22:58:58.321046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-10-11 22:58:58.321110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-10-11 22:58:58.321338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-10-11 22:58:58.321402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-10-11 22:58:58.321686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-10-11 22:58:58.321752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-10-11 22:58:58.321959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-10-11 22:58:58.322023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-10-11 22:58:58.322251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-10-11 22:58:58.322314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 
00:35:55.291 [2024-10-11 22:58:58.322512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-10-11 22:58:58.322590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-10-11 22:58:58.322851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-10-11 22:58:58.322917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-10-11 22:58:58.323162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-10-11 22:58:58.323226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-10-11 22:58:58.323471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-10-11 22:58:58.323536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-10-11 22:58:58.323787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-10-11 22:58:58.323852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 
00:35:55.291 [2024-10-11 22:58:58.324138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-10-11 22:58:58.324201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-10-11 22:58:58.324490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-10-11 22:58:58.324568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-10-11 22:58:58.324862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-10-11 22:58:58.324929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-10-11 22:58:58.325191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-10-11 22:58:58.325254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-10-11 22:58:58.325449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-10-11 22:58:58.325512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 
00:35:55.291 [2024-10-11 22:58:58.325736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-10-11 22:58:58.325800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-10-11 22:58:58.326045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-10-11 22:58:58.326108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-10-11 22:58:58.326333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-10-11 22:58:58.326396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-10-11 22:58:58.326659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-10-11 22:58:58.326725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-10-11 22:58:58.327022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-10-11 22:58:58.327086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 
00:35:55.291 [2024-10-11 22:58:58.327356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-10-11 22:58:58.327419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-10-11 22:58:58.327719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-10-11 22:58:58.327783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-10-11 22:58:58.328076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-10-11 22:58:58.328139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-10-11 22:58:58.328429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-10-11 22:58:58.328492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-10-11 22:58:58.328772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-10-11 22:58:58.328837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 
00:35:55.292 [2024-10-11 22:58:58.329126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-10-11 22:58:58.329189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-10-11 22:58:58.329474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-10-11 22:58:58.329537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-10-11 22:58:58.329741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-10-11 22:58:58.329804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-10-11 22:58:58.330058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-10-11 22:58:58.330121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-10-11 22:58:58.330309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-10-11 22:58:58.330376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 
00:35:55.292 [2024-10-11 22:58:58.330660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-10-11 22:58:58.330726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-10-11 22:58:58.330959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-10-11 22:58:58.331022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-10-11 22:58:58.331307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-10-11 22:58:58.331381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-10-11 22:58:58.331633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-10-11 22:58:58.331697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-10-11 22:58:58.331985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-10-11 22:58:58.332048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 
00:35:55.292 [2024-10-11 22:58:58.332296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-10-11 22:58:58.332359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-10-11 22:58:58.332570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-10-11 22:58:58.332635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-10-11 22:58:58.332849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-10-11 22:58:58.332912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-10-11 22:58:58.333103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-10-11 22:58:58.333168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-10-11 22:58:58.333461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-10-11 22:58:58.333525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 
00:35:55.292 [2024-10-11 22:58:58.333801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-10-11 22:58:58.333865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-10-11 22:58:58.334108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-10-11 22:58:58.334173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-10-11 22:58:58.334420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-10-11 22:58:58.334483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-10-11 22:58:58.334794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-10-11 22:58:58.334860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-10-11 22:58:58.335077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-10-11 22:58:58.335140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 
00:35:55.292 [2024-10-11 22:58:58.335394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-10-11 22:58:58.335460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-10-11 22:58:58.335737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-10-11 22:58:58.335802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-10-11 22:58:58.336091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-10-11 22:58:58.336153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-10-11 22:58:58.336353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-10-11 22:58:58.336416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-10-11 22:58:58.336661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-10-11 22:58:58.336728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 
00:35:55.292 [2024-10-11 22:58:58.337024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-10-11 22:58:58.337087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 00:35:55.293 [2024-10-11 22:58:58.337377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-10-11 22:58:58.337441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 00:35:55.293 [2024-10-11 22:58:58.337650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-10-11 22:58:58.337716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 00:35:55.293 [2024-10-11 22:58:58.337957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-10-11 22:58:58.338019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 00:35:55.293 [2024-10-11 22:58:58.338314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-10-11 22:58:58.338377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 
00:35:55.293 [2024-10-11 22:58:58.338630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-10-11 22:58:58.338695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 00:35:55.293 [2024-10-11 22:58:58.338935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-10-11 22:58:58.338998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 00:35:55.293 [2024-10-11 22:58:58.339213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-10-11 22:58:58.339275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 00:35:55.293 [2024-10-11 22:58:58.339572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-10-11 22:58:58.339638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 00:35:55.293 [2024-10-11 22:58:58.339952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-10-11 22:58:58.340016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 
00:35:55.293 [2024-10-11 22:58:58.340267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-10-11 22:58:58.340330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 00:35:55.293 [2024-10-11 22:58:58.340585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-10-11 22:58:58.340652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 00:35:55.293 [2024-10-11 22:58:58.340906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-10-11 22:58:58.340969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 00:35:55.293 [2024-10-11 22:58:58.341228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-10-11 22:58:58.341294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 00:35:55.293 [2024-10-11 22:58:58.341533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-10-11 22:58:58.341614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 
00:35:55.293 [2024-10-11 22:58:58.341931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-10-11 22:58:58.341995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 
00:35:55.293 [... the identical three-record sequence (posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 22:58:58.341931 through 22:58:58.379300 ...]
00:35:55.297 [2024-10-11 22:58:58.379547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.297 [2024-10-11 22:58:58.379632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.297 qpair failed and we were unable to recover it. 00:35:55.297 [2024-10-11 22:58:58.379923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.297 [2024-10-11 22:58:58.379988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.297 qpair failed and we were unable to recover it. 00:35:55.297 [2024-10-11 22:58:58.380234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.297 [2024-10-11 22:58:58.380300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.297 qpair failed and we were unable to recover it. 00:35:55.297 [2024-10-11 22:58:58.380544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.297 [2024-10-11 22:58:58.380627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.297 qpair failed and we were unable to recover it. 00:35:55.297 [2024-10-11 22:58:58.380828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.297 [2024-10-11 22:58:58.380893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.297 qpair failed and we were unable to recover it. 
00:35:55.297 [2024-10-11 22:58:58.381134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.297 [2024-10-11 22:58:58.381199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.297 qpair failed and we were unable to recover it. 00:35:55.297 [2024-10-11 22:58:58.381482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.297 [2024-10-11 22:58:58.381546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.297 qpair failed and we were unable to recover it. 00:35:55.297 [2024-10-11 22:58:58.381828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.297 [2024-10-11 22:58:58.381892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.297 qpair failed and we were unable to recover it. 00:35:55.297 [2024-10-11 22:58:58.382147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.297 [2024-10-11 22:58:58.382212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.297 qpair failed and we were unable to recover it. 00:35:55.297 [2024-10-11 22:58:58.382469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.297 [2024-10-11 22:58:58.382533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.297 qpair failed and we were unable to recover it. 
00:35:55.297 [2024-10-11 22:58:58.382804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.297 [2024-10-11 22:58:58.382868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.297 qpair failed and we were unable to recover it. 00:35:55.297 [2024-10-11 22:58:58.383115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.297 [2024-10-11 22:58:58.383181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.297 qpair failed and we were unable to recover it. 00:35:55.297 [2024-10-11 22:58:58.383469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.297 [2024-10-11 22:58:58.383543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.297 qpair failed and we were unable to recover it. 00:35:55.297 [2024-10-11 22:58:58.383850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.297 [2024-10-11 22:58:58.383915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.297 qpair failed and we were unable to recover it. 00:35:55.297 [2024-10-11 22:58:58.384158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-10-11 22:58:58.384223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 
00:35:55.298 [2024-10-11 22:58:58.384487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-10-11 22:58:58.384572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-10-11 22:58:58.384829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-10-11 22:58:58.384894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-10-11 22:58:58.385184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-10-11 22:58:58.385247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-10-11 22:58:58.385463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-10-11 22:58:58.385527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-10-11 22:58:58.385762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-10-11 22:58:58.385825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 
00:35:55.298 [2024-10-11 22:58:58.386059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-10-11 22:58:58.386122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-10-11 22:58:58.386371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-10-11 22:58:58.386433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-10-11 22:58:58.386673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-10-11 22:58:58.386739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-10-11 22:58:58.387022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-10-11 22:58:58.387086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-10-11 22:58:58.387341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-10-11 22:58:58.387404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 
00:35:55.298 [2024-10-11 22:58:58.387592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-10-11 22:58:58.387655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-10-11 22:58:58.387915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-10-11 22:58:58.387980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-10-11 22:58:58.388228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-10-11 22:58:58.388294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-10-11 22:58:58.388509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-10-11 22:58:58.388587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-10-11 22:58:58.388884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-10-11 22:58:58.388948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 
00:35:55.298 [2024-10-11 22:58:58.389203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-10-11 22:58:58.389268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-10-11 22:58:58.389473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-10-11 22:58:58.389535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-10-11 22:58:58.389847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-10-11 22:58:58.389910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-10-11 22:58:58.390156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-10-11 22:58:58.390219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-10-11 22:58:58.390512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-10-11 22:58:58.390596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 
00:35:55.298 [2024-10-11 22:58:58.390894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-10-11 22:58:58.390957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-10-11 22:58:58.391243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-10-11 22:58:58.391308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-10-11 22:58:58.391523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-10-11 22:58:58.391608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-10-11 22:58:58.391867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-10-11 22:58:58.391930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-10-11 22:58:58.392201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-10-11 22:58:58.392267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 
00:35:55.298 [2024-10-11 22:58:58.392512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-10-11 22:58:58.392607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-10-11 22:58:58.392819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-10-11 22:58:58.392884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-10-11 22:58:58.393075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-10-11 22:58:58.393138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-10-11 22:58:58.393378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-10-11 22:58:58.393443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-10-11 22:58:58.393747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-10-11 22:58:58.393812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 
00:35:55.299 [2024-10-11 22:58:58.394073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-10-11 22:58:58.394136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-10-11 22:58:58.394397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-10-11 22:58:58.394461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-10-11 22:58:58.394737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-10-11 22:58:58.394801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-10-11 22:58:58.395109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-10-11 22:58:58.395172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-10-11 22:58:58.395460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-10-11 22:58:58.395525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 
00:35:55.299 [2024-10-11 22:58:58.395787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-10-11 22:58:58.395849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-10-11 22:58:58.396095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-10-11 22:58:58.396159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-10-11 22:58:58.396406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-10-11 22:58:58.396478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-10-11 22:58:58.396746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-10-11 22:58:58.396813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-10-11 22:58:58.397077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-10-11 22:58:58.397142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 
00:35:55.299 [2024-10-11 22:58:58.397392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-10-11 22:58:58.397455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-10-11 22:58:58.397723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-10-11 22:58:58.397788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-10-11 22:58:58.398028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-10-11 22:58:58.398091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-10-11 22:58:58.398302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-10-11 22:58:58.398367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-10-11 22:58:58.398601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-10-11 22:58:58.398666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 
00:35:55.299 [2024-10-11 22:58:58.398908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-10-11 22:58:58.398970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-10-11 22:58:58.399189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-10-11 22:58:58.399255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-10-11 22:58:58.399496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-10-11 22:58:58.399575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-10-11 22:58:58.399842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-10-11 22:58:58.399905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-10-11 22:58:58.400157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-10-11 22:58:58.400221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 
00:35:55.299 [2024-10-11 22:58:58.400519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-10-11 22:58:58.400598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-10-11 22:58:58.400810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-10-11 22:58:58.400876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-10-11 22:58:58.401124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.300 [2024-10-11 22:58:58.401187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.300 qpair failed and we were unable to recover it. 00:35:55.300 [2024-10-11 22:58:58.401488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.300 [2024-10-11 22:58:58.401565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.300 qpair failed and we were unable to recover it. 00:35:55.300 [2024-10-11 22:58:58.401790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.300 [2024-10-11 22:58:58.401853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.300 qpair failed and we were unable to recover it. 
00:35:55.300 [2024-10-11 22:58:58.402053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.300 [2024-10-11 22:58:58.402117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.300 qpair failed and we were unable to recover it. 00:35:55.300 [2024-10-11 22:58:58.402374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.300 [2024-10-11 22:58:58.402437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.300 qpair failed and we were unable to recover it. 00:35:55.300 [2024-10-11 22:58:58.402643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.300 [2024-10-11 22:58:58.402708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.300 qpair failed and we were unable to recover it. 00:35:55.300 [2024-10-11 22:58:58.403000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.300 [2024-10-11 22:58:58.403063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.300 qpair failed and we were unable to recover it. 00:35:55.300 [2024-10-11 22:58:58.403271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.300 [2024-10-11 22:58:58.403336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.300 qpair failed and we were unable to recover it. 
00:35:55.300 [2024-10-11 22:58:58.403548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.300 [2024-10-11 22:58:58.403636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.300 qpair failed and we were unable to recover it.
00:35:55.300 [2024-10-11 22:58:58.403927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.300 [2024-10-11 22:58:58.403990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.300 qpair failed and we were unable to recover it.
00:35:55.300 [2024-10-11 22:58:58.404232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.300 [2024-10-11 22:58:58.404295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.300 qpair failed and we were unable to recover it.
00:35:55.300 [2024-10-11 22:58:58.404607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.300 [2024-10-11 22:58:58.404672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.300 qpair failed and we were unable to recover it.
00:35:55.300 [2024-10-11 22:58:58.404909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.300 [2024-10-11 22:58:58.404973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.300 qpair failed and we were unable to recover it.
00:35:55.300 [2024-10-11 22:58:58.405224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.300 [2024-10-11 22:58:58.405289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.300 qpair failed and we were unable to recover it.
00:35:55.300 [2024-10-11 22:58:58.405563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.300 [2024-10-11 22:58:58.405628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.300 qpair failed and we were unable to recover it.
00:35:55.300 [2024-10-11 22:58:58.405839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.300 [2024-10-11 22:58:58.405904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.300 qpair failed and we were unable to recover it.
00:35:55.300 [2024-10-11 22:58:58.406204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.300 [2024-10-11 22:58:58.406268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.300 qpair failed and we were unable to recover it.
00:35:55.300 [2024-10-11 22:58:58.406578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.300 [2024-10-11 22:58:58.406643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.300 qpair failed and we were unable to recover it.
00:35:55.300 [2024-10-11 22:58:58.406891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.300 [2024-10-11 22:58:58.406956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.300 qpair failed and we were unable to recover it.
00:35:55.300 [2024-10-11 22:58:58.407244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.300 [2024-10-11 22:58:58.407307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.300 qpair failed and we were unable to recover it.
00:35:55.300 [2024-10-11 22:58:58.407488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.300 [2024-10-11 22:58:58.407567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.300 qpair failed and we were unable to recover it.
00:35:55.300 [2024-10-11 22:58:58.407836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.300 [2024-10-11 22:58:58.407899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.300 qpair failed and we were unable to recover it.
00:35:55.300 [2024-10-11 22:58:58.408147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.300 [2024-10-11 22:58:58.408210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.300 qpair failed and we were unable to recover it.
00:35:55.300 [2024-10-11 22:58:58.408501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.300 [2024-10-11 22:58:58.408596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.300 qpair failed and we were unable to recover it.
00:35:55.300 [2024-10-11 22:58:58.408891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.300 [2024-10-11 22:58:58.408956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.300 qpair failed and we were unable to recover it.
00:35:55.300 [2024-10-11 22:58:58.409153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.300 [2024-10-11 22:58:58.409232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.300 qpair failed and we were unable to recover it.
00:35:55.300 [2024-10-11 22:58:58.409487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.300 [2024-10-11 22:58:58.409571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.300 qpair failed and we were unable to recover it.
00:35:55.300 [2024-10-11 22:58:58.409801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.300 [2024-10-11 22:58:58.409864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.300 qpair failed and we were unable to recover it.
00:35:55.300 [2024-10-11 22:58:58.410040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.300 [2024-10-11 22:58:58.410102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.300 qpair failed and we were unable to recover it.
00:35:55.300 [2024-10-11 22:58:58.410301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.300 [2024-10-11 22:58:58.410364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.300 qpair failed and we were unable to recover it.
00:35:55.300 [2024-10-11 22:58:58.410586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.300 [2024-10-11 22:58:58.410651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.300 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.410910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.301 [2024-10-11 22:58:58.410974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.301 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.411263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.301 [2024-10-11 22:58:58.411330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.301 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.411518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.301 [2024-10-11 22:58:58.411596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.301 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.411852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.301 [2024-10-11 22:58:58.411915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.301 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.412158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.301 [2024-10-11 22:58:58.412221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.301 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.412487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.301 [2024-10-11 22:58:58.412565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.301 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.412819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.301 [2024-10-11 22:58:58.412884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.301 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.413168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.301 [2024-10-11 22:58:58.413231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.301 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.413483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.301 [2024-10-11 22:58:58.413546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.301 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.413851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.301 [2024-10-11 22:58:58.413915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.301 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.414206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.301 [2024-10-11 22:58:58.414269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.301 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.414516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.301 [2024-10-11 22:58:58.414601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.301 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.414887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.301 [2024-10-11 22:58:58.414950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.301 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.415195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.301 [2024-10-11 22:58:58.415261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.301 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.415566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.301 [2024-10-11 22:58:58.415630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.301 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.415890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.301 [2024-10-11 22:58:58.415953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.301 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.416177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.301 [2024-10-11 22:58:58.416241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.301 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.416526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.301 [2024-10-11 22:58:58.416622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.301 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.416837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.301 [2024-10-11 22:58:58.416899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.301 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.417162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.301 [2024-10-11 22:58:58.417224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.301 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.417460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.301 [2024-10-11 22:58:58.417524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.301 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.417838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.301 [2024-10-11 22:58:58.417903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.301 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.418154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.301 [2024-10-11 22:58:58.418218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.301 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.418416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.301 [2024-10-11 22:58:58.418484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.301 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.418764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.301 [2024-10-11 22:58:58.418829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.301 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.419128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.301 [2024-10-11 22:58:58.419190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.301 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.419446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.301 [2024-10-11 22:58:58.419511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.301 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.419743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.301 [2024-10-11 22:58:58.419808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.301 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.420103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.301 [2024-10-11 22:58:58.420166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.301 qpair failed and we were unable to recover it.
00:35:55.301 [2024-10-11 22:58:58.420419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.302 [2024-10-11 22:58:58.420481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.302 qpair failed and we were unable to recover it.
00:35:55.302 [2024-10-11 22:58:58.420743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.302 [2024-10-11 22:58:58.420808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.302 qpair failed and we were unable to recover it.
00:35:55.302 [2024-10-11 22:58:58.421067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.302 [2024-10-11 22:58:58.421131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.302 qpair failed and we were unable to recover it.
00:35:55.302 [2024-10-11 22:58:58.421379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.302 [2024-10-11 22:58:58.421441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.302 qpair failed and we were unable to recover it.
00:35:55.302 [2024-10-11 22:58:58.421691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.302 [2024-10-11 22:58:58.421758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.302 qpair failed and we were unable to recover it.
00:35:55.302 [2024-10-11 22:58:58.421954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.302 [2024-10-11 22:58:58.422033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.302 qpair failed and we were unable to recover it.
00:35:55.302 [2024-10-11 22:58:58.422284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.302 [2024-10-11 22:58:58.422349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.302 qpair failed and we were unable to recover it.
00:35:55.302 [2024-10-11 22:58:58.422650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.302 [2024-10-11 22:58:58.422715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.302 qpair failed and we were unable to recover it.
00:35:55.302 [2024-10-11 22:58:58.423007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.302 [2024-10-11 22:58:58.423071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.302 qpair failed and we were unable to recover it.
00:35:55.302 [2024-10-11 22:58:58.423336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.302 [2024-10-11 22:58:58.423399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.302 qpair failed and we were unable to recover it.
00:35:55.302 [2024-10-11 22:58:58.423681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.302 [2024-10-11 22:58:58.423715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.302 qpair failed and we were unable to recover it.
00:35:55.302 [2024-10-11 22:58:58.423827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.302 [2024-10-11 22:58:58.423860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.302 qpair failed and we were unable to recover it.
00:35:55.302 [2024-10-11 22:58:58.424024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.302 [2024-10-11 22:58:58.424057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.302 qpair failed and we were unable to recover it.
00:35:55.302 [2024-10-11 22:58:58.424199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.302 [2024-10-11 22:58:58.424232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.302 qpair failed and we were unable to recover it.
00:35:55.302 [2024-10-11 22:58:58.424349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.302 [2024-10-11 22:58:58.424382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.302 qpair failed and we were unable to recover it.
00:35:55.302 [2024-10-11 22:58:58.424600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.302 [2024-10-11 22:58:58.424664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.302 qpair failed and we were unable to recover it.
00:35:55.302 [2024-10-11 22:58:58.424914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.302 [2024-10-11 22:58:58.424977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.302 qpair failed and we were unable to recover it.
00:35:55.302 [2024-10-11 22:58:58.425228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.302 [2024-10-11 22:58:58.425292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.302 qpair failed and we were unable to recover it.
00:35:55.302 [2024-10-11 22:58:58.425491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.302 [2024-10-11 22:58:58.425572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.302 qpair failed and we were unable to recover it.
00:35:55.302 [2024-10-11 22:58:58.425845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.302 [2024-10-11 22:58:58.425908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.302 qpair failed and we were unable to recover it.
00:35:55.302 [2024-10-11 22:58:58.426102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.302 [2024-10-11 22:58:58.426165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.302 qpair failed and we were unable to recover it.
00:35:55.302 [2024-10-11 22:58:58.426412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.302 [2024-10-11 22:58:58.426474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.302 qpair failed and we were unable to recover it.
00:35:55.302 [2024-10-11 22:58:58.426740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.302 [2024-10-11 22:58:58.426805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.302 qpair failed and we were unable to recover it.
00:35:55.302 [2024-10-11 22:58:58.427066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.302 [2024-10-11 22:58:58.427129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.302 qpair failed and we were unable to recover it.
00:35:55.302 [2024-10-11 22:58:58.427371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.302 [2024-10-11 22:58:58.427434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.302 qpair failed and we were unable to recover it.
00:35:55.302 [2024-10-11 22:58:58.427710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.302 [2024-10-11 22:58:58.427774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.302 qpair failed and we were unable to recover it.
00:35:55.302 [2024-10-11 22:58:58.428033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.302 [2024-10-11 22:58:58.428096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.302 qpair failed and we were unable to recover it.
00:35:55.302 [2024-10-11 22:58:58.428322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.302 [2024-10-11 22:58:58.428384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.302 qpair failed and we were unable to recover it.
00:35:55.302 [2024-10-11 22:58:58.428634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.302 [2024-10-11 22:58:58.428699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.302 qpair failed and we were unable to recover it.
00:35:55.302 [2024-10-11 22:58:58.428944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.429007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.429307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.429369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.429618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.429682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.429936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.430000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.430248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.430311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.430600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.430666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.430951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.431014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.431321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.431384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.431640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.431705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.431953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.432016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.432265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.432329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.432585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.432651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.432850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.432912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.433156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.433222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.433511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.433589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.433846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.433908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.434198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.434271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.434578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.434644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.434934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.434997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.435280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.435346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.435646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.435710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.435954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.436020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.436234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.436296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.436536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.436619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.436847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.436910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.437195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.437258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.437495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.437590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.437846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.437911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.438175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.438238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.438433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.438499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.438796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.438861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.439172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.439236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.439445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.439508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.439784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.303 [2024-10-11 22:58:58.439848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.303 qpair failed and we were unable to recover it.
00:35:55.303 [2024-10-11 22:58:58.440091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-10-11 22:58:58.440154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.304 [2024-10-11 22:58:58.440396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.304 [2024-10-11 22:58:58.440463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.304 qpair failed and we were unable to recover it. 00:35:55.304 [2024-10-11 22:58:58.440698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.304 [2024-10-11 22:58:58.440765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.304 qpair failed and we were unable to recover it. 00:35:55.304 [2024-10-11 22:58:58.440983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.304 [2024-10-11 22:58:58.441046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.304 qpair failed and we were unable to recover it. 00:35:55.304 [2024-10-11 22:58:58.441333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.304 [2024-10-11 22:58:58.441396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.304 qpair failed and we were unable to recover it. 
00:35:55.304 [2024-10-11 22:58:58.441693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.304 [2024-10-11 22:58:58.441759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.304 qpair failed and we were unable to recover it. 00:35:55.304 [2024-10-11 22:58:58.442042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.304 [2024-10-11 22:58:58.442105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.304 qpair failed and we were unable to recover it. 00:35:55.304 [2024-10-11 22:58:58.442345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.304 [2024-10-11 22:58:58.442407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.304 qpair failed and we were unable to recover it. 00:35:55.304 [2024-10-11 22:58:58.442700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.304 [2024-10-11 22:58:58.442765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.304 qpair failed and we were unable to recover it. 00:35:55.304 [2024-10-11 22:58:58.443027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.304 [2024-10-11 22:58:58.443090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.304 qpair failed and we were unable to recover it. 
00:35:55.304 [2024-10-11 22:58:58.443308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.304 [2024-10-11 22:58:58.443372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.304 qpair failed and we were unable to recover it. 00:35:55.304 [2024-10-11 22:58:58.443627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.304 [2024-10-11 22:58:58.443692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.304 qpair failed and we were unable to recover it. 00:35:55.304 [2024-10-11 22:58:58.443917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.304 [2024-10-11 22:58:58.443980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.304 qpair failed and we were unable to recover it. 00:35:55.304 [2024-10-11 22:58:58.444225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.304 [2024-10-11 22:58:58.444287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.304 qpair failed and we were unable to recover it. 00:35:55.304 [2024-10-11 22:58:58.444605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.304 [2024-10-11 22:58:58.444671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.304 qpair failed and we were unable to recover it. 
00:35:55.304 [2024-10-11 22:58:58.444930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.304 [2024-10-11 22:58:58.444993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.304 qpair failed and we were unable to recover it. 00:35:55.304 [2024-10-11 22:58:58.445298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.304 [2024-10-11 22:58:58.445360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.304 qpair failed and we were unable to recover it. 00:35:55.304 [2024-10-11 22:58:58.445610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.304 [2024-10-11 22:58:58.445675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.304 qpair failed and we were unable to recover it. 00:35:55.304 [2024-10-11 22:58:58.445964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.304 [2024-10-11 22:58:58.446026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.304 qpair failed and we were unable to recover it. 00:35:55.304 [2024-10-11 22:58:58.446317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.304 [2024-10-11 22:58:58.446379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.304 qpair failed and we were unable to recover it. 
00:35:55.304 [2024-10-11 22:58:58.446593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.304 [2024-10-11 22:58:58.446659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.304 qpair failed and we were unable to recover it. 00:35:55.304 [2024-10-11 22:58:58.446901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.304 [2024-10-11 22:58:58.446964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.304 qpair failed and we were unable to recover it. 00:35:55.304 [2024-10-11 22:58:58.447240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.304 [2024-10-11 22:58:58.447313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.304 qpair failed and we were unable to recover it. 00:35:55.304 [2024-10-11 22:58:58.447600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.304 [2024-10-11 22:58:58.447664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.304 qpair failed and we were unable to recover it. 00:35:55.304 [2024-10-11 22:58:58.447866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.304 [2024-10-11 22:58:58.447931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.304 qpair failed and we were unable to recover it. 
00:35:55.304 [2024-10-11 22:58:58.448174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.304 [2024-10-11 22:58:58.448237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-10-11 22:58:58.448484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.448547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-10-11 22:58:58.448835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.448898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-10-11 22:58:58.449150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.449212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-10-11 22:58:58.449471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.449533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 
00:35:55.305 [2024-10-11 22:58:58.449813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.449876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-10-11 22:58:58.450164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.450227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-10-11 22:58:58.450498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.450579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-10-11 22:58:58.450823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.450886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-10-11 22:58:58.451077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.451142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 
00:35:55.305 [2024-10-11 22:58:58.451346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.451411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-10-11 22:58:58.451713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.451778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-10-11 22:58:58.452071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.452134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-10-11 22:58:58.452374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.452437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-10-11 22:58:58.452657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.452722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 
00:35:55.305 [2024-10-11 22:58:58.453027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.453090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-10-11 22:58:58.453337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.453403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-10-11 22:58:58.453695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.453760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-10-11 22:58:58.453948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.454011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-10-11 22:58:58.454258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.454322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 
00:35:55.305 [2024-10-11 22:58:58.454526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.454605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-10-11 22:58:58.454866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.454930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-10-11 22:58:58.455171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.455234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-10-11 22:58:58.455447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.455513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-10-11 22:58:58.455811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.455875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 
00:35:55.305 [2024-10-11 22:58:58.456125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.456188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-10-11 22:58:58.456415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.456477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-10-11 22:58:58.456743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.456808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-10-11 22:58:58.457033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.457096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-10-11 22:58:58.457341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.457403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 
00:35:55.305 [2024-10-11 22:58:58.457704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.457770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-10-11 22:58:58.458024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-10-11 22:58:58.458087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.306 [2024-10-11 22:58:58.458374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-10-11 22:58:58.458436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-10-11 22:58:58.458736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-10-11 22:58:58.458799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-10-11 22:58:58.459095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-10-11 22:58:58.459158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 
00:35:55.306 [2024-10-11 22:58:58.459401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-10-11 22:58:58.459463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-10-11 22:58:58.459691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-10-11 22:58:58.459757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-10-11 22:58:58.460051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-10-11 22:58:58.460115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-10-11 22:58:58.460414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-10-11 22:58:58.460477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-10-11 22:58:58.460791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-10-11 22:58:58.460856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 
00:35:55.306 [2024-10-11 22:58:58.461150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-10-11 22:58:58.461213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-10-11 22:58:58.461471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-10-11 22:58:58.461537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-10-11 22:58:58.461870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-10-11 22:58:58.461933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-10-11 22:58:58.462126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-10-11 22:58:58.462188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-10-11 22:58:58.462431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-10-11 22:58:58.462494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 
00:35:55.306 [2024-10-11 22:58:58.462764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-10-11 22:58:58.462828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-10-11 22:58:58.463116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-10-11 22:58:58.463179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-10-11 22:58:58.463426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-10-11 22:58:58.463491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-10-11 22:58:58.463752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-10-11 22:58:58.463816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-10-11 22:58:58.464064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-10-11 22:58:58.464127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 
00:35:55.306 [2024-10-11 22:58:58.464437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.306 [2024-10-11 22:58:58.464500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.306 qpair failed and we were unable to recover it.
00:35:55.306 [2024-10-11 22:58:58.464808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.306 [2024-10-11 22:58:58.464871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.306 qpair failed and we were unable to recover it.
00:35:55.306 [2024-10-11 22:58:58.465155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.306 [2024-10-11 22:58:58.465217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.306 qpair failed and we were unable to recover it.
00:35:55.306 [2024-10-11 22:58:58.465509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.306 [2024-10-11 22:58:58.465592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.306 qpair failed and we were unable to recover it.
00:35:55.306 [2024-10-11 22:58:58.465816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.306 [2024-10-11 22:58:58.465879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.306 qpair failed and we were unable to recover it.
00:35:55.306 [2024-10-11 22:58:58.466164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.306 [2024-10-11 22:58:58.466226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.306 qpair failed and we were unable to recover it.
00:35:55.306 [2024-10-11 22:58:58.466472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.306 [2024-10-11 22:58:58.466535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.306 qpair failed and we were unable to recover it.
00:35:55.306 [2024-10-11 22:58:58.466861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.306 [2024-10-11 22:58:58.466923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.306 qpair failed and we were unable to recover it.
00:35:55.306 [2024-10-11 22:58:58.467178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.306 [2024-10-11 22:58:58.467241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.306 qpair failed and we were unable to recover it.
00:35:55.306 [2024-10-11 22:58:58.467523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.306 [2024-10-11 22:58:58.467607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.306 qpair failed and we were unable to recover it.
00:35:55.306 [2024-10-11 22:58:58.467824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.306 [2024-10-11 22:58:58.467886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.306 qpair failed and we were unable to recover it.
00:35:55.306 [2024-10-11 22:58:58.468133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.306 [2024-10-11 22:58:58.468196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.306 qpair failed and we were unable to recover it.
00:35:55.306 [2024-10-11 22:58:58.468462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.306 [2024-10-11 22:58:58.468527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.306 qpair failed and we were unable to recover it.
00:35:55.306 [2024-10-11 22:58:58.468857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.306 [2024-10-11 22:58:58.468920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.306 qpair failed and we were unable to recover it.
00:35:55.306 [2024-10-11 22:58:58.469121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.306 [2024-10-11 22:58:58.469194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.469420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.469483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.469744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.469808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.470011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.470075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.470325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.470387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.470696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.470761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.471004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.471067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.471294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.471353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.471592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.471656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.471901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.471963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.472250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.472312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.472600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.472664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.472956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.473019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.473308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.473370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.473677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.473744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.474033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.474096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.474338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.474401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.474625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.474690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.474876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.474938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.475217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.475280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.475528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.475605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.475856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.475918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.476161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.476226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.476466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.476529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.476793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.476856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.477065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.477125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.477426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.477487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.477731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.477796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.478082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.478145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.478399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.478462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.478743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.478808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.479100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.479163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.479451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.307 [2024-10-11 22:58:58.479515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.307 qpair failed and we were unable to recover it.
00:35:55.307 [2024-10-11 22:58:58.479833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.479895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.480139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.480202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.480458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.480521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.480769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.480834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.481085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.481147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.481399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.481462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.481704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.481770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.481961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.482034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.482236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.482302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.482595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.482661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.482905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.482968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.483211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.483277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.483579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.483643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.483861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.483923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.484176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.484239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.484538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.484633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.484897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.484969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.485200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.485263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.485518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.485598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.485851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.485913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.486159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.486221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.486479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.486542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.486821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.486883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.487185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.487248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.487458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.487521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.487799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.487863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.488151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.488214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.488420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.488483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.488764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.488827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.489093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.489155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.308 [2024-10-11 22:58:58.489363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.308 [2024-10-11 22:58:58.489429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.308 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.489647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.489713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.490008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.490072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.490363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.490427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.490689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.490755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.490974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.491039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.491332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.491395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.491621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.491685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.491957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.492019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.492275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.492337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.492541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.492637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.492899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.492962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.493221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.493284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.493671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.493737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.494023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.494086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.494286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.494353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.494607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.494671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.494961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.495035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.495335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.495398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.495616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.495680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.495867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.495928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.496210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.496273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.496524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.496607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.496851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.496925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.497171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.497235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.497452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.497517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.497793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.497856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.498161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.498223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.498485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.498548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.498793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.498859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.499151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.499213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.499464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.309 [2024-10-11 22:58:58.499527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.309 qpair failed and we were unable to recover it.
00:35:55.309 [2024-10-11 22:58:58.499750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.310 [2024-10-11 22:58:58.499814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.310 qpair failed and we were unable to recover it.
00:35:55.310 [2024-10-11 22:58:58.500093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.310 [2024-10-11 22:58:58.500156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.310 qpair failed and we were unable to recover it.
00:35:55.310 [2024-10-11 22:58:58.500348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.310 [2024-10-11 22:58:58.500411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.310 qpair failed and we were unable to recover it.
00:35:55.310 [2024-10-11 22:58:58.500669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.310 [2024-10-11 22:58:58.500734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.310 qpair failed and we were unable to recover it.
00:35:55.310 [2024-10-11 22:58:58.501024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.310 [2024-10-11 22:58:58.501087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.310 qpair failed and we were unable to recover it.
00:35:55.310 [2024-10-11 22:58:58.501305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.310 [2024-10-11 22:58:58.501368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.310 qpair failed and we were unable to recover it.
00:35:55.310 [2024-10-11 22:58:58.501656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.310 [2024-10-11 22:58:58.501721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.310 qpair failed and we were unable to recover it. 00:35:55.310 [2024-10-11 22:58:58.501976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.310 [2024-10-11 22:58:58.502040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.310 qpair failed and we were unable to recover it. 00:35:55.310 [2024-10-11 22:58:58.502255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.310 [2024-10-11 22:58:58.502320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.310 qpair failed and we were unable to recover it. 00:35:55.310 [2024-10-11 22:58:58.502573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.310 [2024-10-11 22:58:58.502639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.310 qpair failed and we were unable to recover it. 00:35:55.310 [2024-10-11 22:58:58.502886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.310 [2024-10-11 22:58:58.502949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.310 qpair failed and we were unable to recover it. 
00:35:55.310 [2024-10-11 22:58:58.503241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.310 [2024-10-11 22:58:58.503303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.310 qpair failed and we were unable to recover it. 00:35:55.310 [2024-10-11 22:58:58.503530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.310 [2024-10-11 22:58:58.503611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.310 qpair failed and we were unable to recover it. 00:35:55.310 [2024-10-11 22:58:58.503888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.310 [2024-10-11 22:58:58.503952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.310 qpair failed and we were unable to recover it. 00:35:55.310 [2024-10-11 22:58:58.504213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.310 [2024-10-11 22:58:58.504275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.310 qpair failed and we were unable to recover it. 00:35:55.310 [2024-10-11 22:58:58.504528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.310 [2024-10-11 22:58:58.504611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.310 qpair failed and we were unable to recover it. 
00:35:55.310 [2024-10-11 22:58:58.504903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.310 [2024-10-11 22:58:58.504967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.310 qpair failed and we were unable to recover it. 00:35:55.310 [2024-10-11 22:58:58.505188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.310 [2024-10-11 22:58:58.505250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.310 qpair failed and we were unable to recover it. 00:35:55.310 [2024-10-11 22:58:58.505532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.310 [2024-10-11 22:58:58.505614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.310 qpair failed and we were unable to recover it. 00:35:55.310 [2024-10-11 22:58:58.505917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.310 [2024-10-11 22:58:58.505980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.310 qpair failed and we were unable to recover it. 00:35:55.310 [2024-10-11 22:58:58.506281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.310 [2024-10-11 22:58:58.506344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.310 qpair failed and we were unable to recover it. 
00:35:55.310 [2024-10-11 22:58:58.506600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.310 [2024-10-11 22:58:58.506665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.310 qpair failed and we were unable to recover it. 00:35:55.310 [2024-10-11 22:58:58.506913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.310 [2024-10-11 22:58:58.506980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.310 qpair failed and we were unable to recover it. 00:35:55.310 [2024-10-11 22:58:58.507228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.310 [2024-10-11 22:58:58.507292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.310 qpair failed and we were unable to recover it. 00:35:55.310 [2024-10-11 22:58:58.507584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.310 [2024-10-11 22:58:58.507649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.310 qpair failed and we were unable to recover it. 00:35:55.310 [2024-10-11 22:58:58.507899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.310 [2024-10-11 22:58:58.507971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.310 qpair failed and we were unable to recover it. 
00:35:55.310 [2024-10-11 22:58:58.508213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.310 [2024-10-11 22:58:58.508277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.310 qpair failed and we were unable to recover it. 00:35:55.310 [2024-10-11 22:58:58.508601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.310 [2024-10-11 22:58:58.508667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.310 qpair failed and we were unable to recover it. 00:35:55.310 [2024-10-11 22:58:58.508930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.310 [2024-10-11 22:58:58.508991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.310 qpair failed and we were unable to recover it. 00:35:55.310 [2024-10-11 22:58:58.509249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.310 [2024-10-11 22:58:58.509309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.310 qpair failed and we were unable to recover it. 00:35:55.310 [2024-10-11 22:58:58.509564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.310 [2024-10-11 22:58:58.509626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.310 qpair failed and we were unable to recover it. 
00:35:55.310 [2024-10-11 22:58:58.509910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.310 [2024-10-11 22:58:58.509971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.310 qpair failed and we were unable to recover it. 00:35:55.310 [2024-10-11 22:58:58.510183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.310 [2024-10-11 22:58:58.510244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.310 qpair failed and we were unable to recover it. 00:35:55.311 [2024-10-11 22:58:58.510531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.510610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 00:35:55.311 [2024-10-11 22:58:58.510901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.510962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 00:35:55.311 [2024-10-11 22:58:58.511171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.511230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 
00:35:55.311 [2024-10-11 22:58:58.511469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.511530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 00:35:55.311 [2024-10-11 22:58:58.511809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.511870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 00:35:55.311 [2024-10-11 22:58:58.512125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.512186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 00:35:55.311 [2024-10-11 22:58:58.512394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.512454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 00:35:55.311 [2024-10-11 22:58:58.512730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.512792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 
00:35:55.311 [2024-10-11 22:58:58.513031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.513091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 00:35:55.311 [2024-10-11 22:58:58.513283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.513343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 00:35:55.311 [2024-10-11 22:58:58.513627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.513690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 00:35:55.311 [2024-10-11 22:58:58.513899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.513962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 00:35:55.311 [2024-10-11 22:58:58.514230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.514291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 
00:35:55.311 [2024-10-11 22:58:58.514540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.514615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 00:35:55.311 [2024-10-11 22:58:58.514820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.514879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 00:35:55.311 [2024-10-11 22:58:58.515165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.515225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 00:35:55.311 [2024-10-11 22:58:58.515471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.515532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 00:35:55.311 [2024-10-11 22:58:58.515746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.515805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 
00:35:55.311 [2024-10-11 22:58:58.516097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.516157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 00:35:55.311 [2024-10-11 22:58:58.516430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.516494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 00:35:55.311 [2024-10-11 22:58:58.516742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.516806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 00:35:55.311 [2024-10-11 22:58:58.517023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.517090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 00:35:55.311 [2024-10-11 22:58:58.517344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.517407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 
00:35:55.311 [2024-10-11 22:58:58.517616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.517682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 00:35:55.311 [2024-10-11 22:58:58.517881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.517947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 00:35:55.311 [2024-10-11 22:58:58.518246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.518310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 00:35:55.311 [2024-10-11 22:58:58.518507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.518587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 00:35:55.311 [2024-10-11 22:58:58.518834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.518897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 
00:35:55.311 [2024-10-11 22:58:58.519140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.519204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 00:35:55.311 [2024-10-11 22:58:58.519489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.519566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 00:35:55.311 [2024-10-11 22:58:58.519762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.519825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 00:35:55.311 [2024-10-11 22:58:58.520026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.520094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 00:35:55.311 [2024-10-11 22:58:58.520342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.311 [2024-10-11 22:58:58.520417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.311 qpair failed and we were unable to recover it. 
00:35:55.312 [2024-10-11 22:58:58.520726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.312 [2024-10-11 22:58:58.520792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.312 qpair failed and we were unable to recover it. 00:35:55.312 [2024-10-11 22:58:58.521045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.312 [2024-10-11 22:58:58.521109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.312 qpair failed and we were unable to recover it. 00:35:55.312 [2024-10-11 22:58:58.521412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.312 [2024-10-11 22:58:58.521475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.312 qpair failed and we were unable to recover it. 00:35:55.312 [2024-10-11 22:58:58.521749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.312 [2024-10-11 22:58:58.521814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.312 qpair failed and we were unable to recover it. 00:35:55.312 [2024-10-11 22:58:58.522020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.312 [2024-10-11 22:58:58.522084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.312 qpair failed and we were unable to recover it. 
00:35:55.312 [2024-10-11 22:58:58.522330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.312 [2024-10-11 22:58:58.522393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.312 qpair failed and we were unable to recover it. 00:35:55.312 [2024-10-11 22:58:58.522606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.312 [2024-10-11 22:58:58.522672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.312 qpair failed and we were unable to recover it. 00:35:55.312 [2024-10-11 22:58:58.522896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.312 [2024-10-11 22:58:58.522961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.312 qpair failed and we were unable to recover it. 00:35:55.312 [2024-10-11 22:58:58.523249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.312 [2024-10-11 22:58:58.523313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.312 qpair failed and we were unable to recover it. 00:35:55.312 [2024-10-11 22:58:58.523564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.312 [2024-10-11 22:58:58.523629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.312 qpair failed and we were unable to recover it. 
00:35:55.312 [2024-10-11 22:58:58.523862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.312 [2024-10-11 22:58:58.523937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.312 qpair failed and we were unable to recover it. 00:35:55.312 [2024-10-11 22:58:58.524201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.312 [2024-10-11 22:58:58.524265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.312 qpair failed and we were unable to recover it. 00:35:55.312 [2024-10-11 22:58:58.524461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.312 [2024-10-11 22:58:58.524524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.312 qpair failed and we were unable to recover it. 00:35:55.312 [2024-10-11 22:58:58.524787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.312 [2024-10-11 22:58:58.524851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.312 qpair failed and we were unable to recover it. 00:35:55.312 [2024-10-11 22:58:58.525095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.312 [2024-10-11 22:58:58.525158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.312 qpair failed and we were unable to recover it. 
00:35:55.312 [2024-10-11 22:58:58.525362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.312 [2024-10-11 22:58:58.525426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.312 qpair failed and we were unable to recover it.
00:35:55.312 [2024-10-11 22:58:58.525718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.312 [2024-10-11 22:58:58.525783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.312 qpair failed and we were unable to recover it.
00:35:55.312 [2024-10-11 22:58:58.525980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.312 [2024-10-11 22:58:58.526045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.312 qpair failed and we were unable to recover it.
00:35:55.312 [2024-10-11 22:58:58.526259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.312 [2024-10-11 22:58:58.526324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.312 qpair failed and we were unable to recover it.
00:35:55.312 [2024-10-11 22:58:58.526581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.312 [2024-10-11 22:58:58.526646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.312 qpair failed and we were unable to recover it.
00:35:55.312 [2024-10-11 22:58:58.526889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.312 [2024-10-11 22:58:58.526952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.312 qpair failed and we were unable to recover it.
00:35:55.312 [2024-10-11 22:58:58.527152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.312 [2024-10-11 22:58:58.527214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.312 qpair failed and we were unable to recover it.
00:35:55.312 [2024-10-11 22:58:58.527462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.312 [2024-10-11 22:58:58.527526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.312 qpair failed and we were unable to recover it.
00:35:55.312 [2024-10-11 22:58:58.527749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.312 [2024-10-11 22:58:58.527814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.312 qpair failed and we were unable to recover it.
00:35:55.312 [2024-10-11 22:58:58.528060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.312 [2024-10-11 22:58:58.528125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.312 qpair failed and we were unable to recover it.
00:35:55.312 [2024-10-11 22:58:58.528386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.312 [2024-10-11 22:58:58.528450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.312 qpair failed and we were unable to recover it.
00:35:55.312 [2024-10-11 22:58:58.528696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.312 [2024-10-11 22:58:58.528761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.312 qpair failed and we were unable to recover it.
00:35:55.312 [2024-10-11 22:58:58.529009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.312 [2024-10-11 22:58:58.529072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.312 qpair failed and we were unable to recover it.
00:35:55.312 [2024-10-11 22:58:58.529272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.312 [2024-10-11 22:58:58.529338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.312 qpair failed and we were unable to recover it.
00:35:55.312 [2024-10-11 22:58:58.529592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.312 [2024-10-11 22:58:58.529659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.312 qpair failed and we were unable to recover it.
00:35:55.313 [2024-10-11 22:58:58.529855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.313 [2024-10-11 22:58:58.529918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.313 qpair failed and we were unable to recover it.
00:35:55.313 [2024-10-11 22:58:58.530173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.313 [2024-10-11 22:58:58.530235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.313 qpair failed and we were unable to recover it.
00:35:55.313 [2024-10-11 22:58:58.530449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.313 [2024-10-11 22:58:58.530512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.313 qpair failed and we were unable to recover it.
00:35:55.313 [2024-10-11 22:58:58.530742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.313 [2024-10-11 22:58:58.530806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.313 qpair failed and we were unable to recover it.
00:35:55.313 [2024-10-11 22:58:58.531026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.313 [2024-10-11 22:58:58.531089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.313 qpair failed and we were unable to recover it.
00:35:55.313 [2024-10-11 22:58:58.531300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.313 [2024-10-11 22:58:58.531365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.313 qpair failed and we were unable to recover it.
00:35:55.313 [2024-10-11 22:58:58.531622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.313 [2024-10-11 22:58:58.531689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.313 qpair failed and we were unable to recover it.
00:35:55.313 [2024-10-11 22:58:58.531942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.313 [2024-10-11 22:58:58.532005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.313 qpair failed and we were unable to recover it.
00:35:55.313 [2024-10-11 22:58:58.532255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.313 [2024-10-11 22:58:58.532318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.313 qpair failed and we were unable to recover it.
00:35:55.313 [2024-10-11 22:58:58.532567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.313 [2024-10-11 22:58:58.532643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.313 qpair failed and we were unable to recover it.
00:35:55.313 [2024-10-11 22:58:58.532893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.313 [2024-10-11 22:58:58.532957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.313 qpair failed and we were unable to recover it.
00:35:55.313 [2024-10-11 22:58:58.533167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.313 [2024-10-11 22:58:58.533230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.313 qpair failed and we were unable to recover it.
00:35:55.313 [2024-10-11 22:58:58.533452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.313 [2024-10-11 22:58:58.533516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.313 qpair failed and we were unable to recover it.
00:35:55.313 [2024-10-11 22:58:58.533810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.313 [2024-10-11 22:58:58.533874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.313 qpair failed and we were unable to recover it.
00:35:55.586 [2024-10-11 22:58:58.534062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.586 [2024-10-11 22:58:58.534126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.586 qpair failed and we were unable to recover it.
00:35:55.586 [2024-10-11 22:58:58.534375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.586 [2024-10-11 22:58:58.534438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.586 qpair failed and we were unable to recover it.
00:35:55.586 [2024-10-11 22:58:58.534669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.586 [2024-10-11 22:58:58.534734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.586 qpair failed and we were unable to recover it.
00:35:55.586 [2024-10-11 22:58:58.534955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.586 [2024-10-11 22:58:58.535020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.586 qpair failed and we were unable to recover it.
00:35:55.586 [2024-10-11 22:58:58.535319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.586 [2024-10-11 22:58:58.535383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.586 qpair failed and we were unable to recover it.
00:35:55.586 [2024-10-11 22:58:58.535633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.586 [2024-10-11 22:58:58.535698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.586 qpair failed and we were unable to recover it.
00:35:55.586 [2024-10-11 22:58:58.535949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.586 [2024-10-11 22:58:58.536014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.586 qpair failed and we were unable to recover it.
00:35:55.586 [2024-10-11 22:58:58.536214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.586 [2024-10-11 22:58:58.536281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.586 qpair failed and we were unable to recover it.
00:35:55.586 [2024-10-11 22:58:58.536579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.586 [2024-10-11 22:58:58.536653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.586 qpair failed and we were unable to recover it.
00:35:55.586 [2024-10-11 22:58:58.536876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.536940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.537144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.537208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.537426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.537491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.537726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.537790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.538003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.538068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.538321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.538384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.538590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.538655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.538884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.538946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.539223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.539287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.539526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.539608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.539862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.539925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.540221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.540284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.540531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.540613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.540834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.540898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.541097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.541163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.541457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.541521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.541819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.541884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.542080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.542142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.542396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.542459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.542685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.542750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.543054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.543118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.543373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.543437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.543655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.543720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.543979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.544041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.544225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.544288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.544545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.544626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.544831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.544905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.545199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.545262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.545507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.545591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.545807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.545871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.546066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.546132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.546375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.546440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.546694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.546759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.546964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.547029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.547310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.547374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.547613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.547678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.547927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.547990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.548219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.548283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.548473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.548538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.548767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.548831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.587 [2024-10-11 22:58:58.549091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.587 [2024-10-11 22:58:58.549153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.587 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.549394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.549457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.549738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.549802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.549995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.550059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.550278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.550342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.550595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.550660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.550940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.551004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.551252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.551317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.551587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.551654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.551903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.551967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.552216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.552279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.552465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.552528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.552748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.552813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.553016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.553081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.553306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.553369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.553610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.553674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.553960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.554023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.554249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.554311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.554565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.554628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.554908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.554971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.555212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.555276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.555466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.555529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.555819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.555882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.556131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.556194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.556396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.556458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.556685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.556750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.556939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.557013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.557252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.557314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.557583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.557648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.557902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.557966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.558246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.558308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.558606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.558673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.558875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.558939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.559177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.559241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.559523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.559622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.559846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.559908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.560200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.588 [2024-10-11 22:58:58.560264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.588 qpair failed and we were unable to recover it.
00:35:55.588 [2024-10-11 22:58:58.560515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.588 [2024-10-11 22:58:58.560599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.588 qpair failed and we were unable to recover it. 00:35:55.588 [2024-10-11 22:58:58.560848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.588 [2024-10-11 22:58:58.560911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.588 qpair failed and we were unable to recover it. 00:35:55.588 [2024-10-11 22:58:58.561125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.588 [2024-10-11 22:58:58.561189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.588 qpair failed and we were unable to recover it. 00:35:55.588 [2024-10-11 22:58:58.561445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.588 [2024-10-11 22:58:58.561508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.588 qpair failed and we were unable to recover it. 00:35:55.588 [2024-10-11 22:58:58.561809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.588 [2024-10-11 22:58:58.561874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 
00:35:55.589 [2024-10-11 22:58:58.562082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.562145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.562429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.562492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.562760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.562824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.563086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.563149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.563391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.563454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 
00:35:55.589 [2024-10-11 22:58:58.563680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.563745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.563999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.564064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.564378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.564441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.564686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.564752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.564962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.565026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 
00:35:55.589 [2024-10-11 22:58:58.565278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.565340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.565616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.565682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.565877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.565941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.566234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.566298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.566586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.566650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 
00:35:55.589 [2024-10-11 22:58:58.566842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.566905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.567105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.567177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.567374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.567436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.567669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.567733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.567978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.568041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 
00:35:55.589 [2024-10-11 22:58:58.568282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.568348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.568611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.568678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.568921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.568984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.569229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.569292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.569488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.569580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 
00:35:55.589 [2024-10-11 22:58:58.569853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.569916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.570156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.570221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.570502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.570583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.570838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.570901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.571187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.571249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 
00:35:55.589 [2024-10-11 22:58:58.571459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.571520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.571793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.571856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.572105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.572168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.572417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.572479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.572737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.572801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 
00:35:55.589 [2024-10-11 22:58:58.573010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.573074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.573284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.573346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.573591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.573657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.573888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.573952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 00:35:55.589 [2024-10-11 22:58:58.574190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.589 [2024-10-11 22:58:58.574252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.589 qpair failed and we were unable to recover it. 
00:35:55.589 [2024-10-11 22:58:58.574501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.574579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-10-11 22:58:58.574842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.574905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-10-11 22:58:58.575146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.575207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-10-11 22:58:58.575409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.575471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-10-11 22:58:58.575696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.575760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 
00:35:55.590 [2024-10-11 22:58:58.575999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.576061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-10-11 22:58:58.576276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.576338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-10-11 22:58:58.576612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.576677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-10-11 22:58:58.576910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.576975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-10-11 22:58:58.577207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.577270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 
00:35:55.590 [2024-10-11 22:58:58.577510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.577594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-10-11 22:58:58.577813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.577890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-10-11 22:58:58.578141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.578204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-10-11 22:58:58.578444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.578506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-10-11 22:58:58.578721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.578784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 
00:35:55.590 [2024-10-11 22:58:58.579049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.579112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-10-11 22:58:58.579302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.579368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-10-11 22:58:58.579636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.579702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-10-11 22:58:58.579902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.579965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-10-11 22:58:58.580173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.580238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 
00:35:55.590 [2024-10-11 22:58:58.580479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.580542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-10-11 22:58:58.580818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.580883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-10-11 22:58:58.581149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.581212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-10-11 22:58:58.581456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.581521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-10-11 22:58:58.581715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.581789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 
00:35:55.590 [2024-10-11 22:58:58.582053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.582115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-10-11 22:58:58.582310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.582373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-10-11 22:58:58.582613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.582679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-10-11 22:58:58.582898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.582959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-10-11 22:58:58.583164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.583227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 
00:35:55.590 [2024-10-11 22:58:58.583451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.583513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-10-11 22:58:58.583816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.583879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-10-11 22:58:58.584071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.584134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-10-11 22:58:58.584323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.584385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-10-11 22:58:58.584606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-10-11 22:58:58.584673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 
00:35:55.593 [2024-10-11 22:58:58.620621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.593 [2024-10-11 22:58:58.620686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.593 qpair failed and we were unable to recover it. 00:35:55.593 [2024-10-11 22:58:58.620951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.593 [2024-10-11 22:58:58.621014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.593 qpair failed and we were unable to recover it. 00:35:55.593 [2024-10-11 22:58:58.621305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.593 [2024-10-11 22:58:58.621367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.593 qpair failed and we were unable to recover it. 00:35:55.593 [2024-10-11 22:58:58.621668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.593 [2024-10-11 22:58:58.621733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.593 qpair failed and we were unable to recover it. 00:35:55.593 [2024-10-11 22:58:58.621921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.593 [2024-10-11 22:58:58.621984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.593 qpair failed and we were unable to recover it. 
00:35:55.593 [2024-10-11 22:58:58.622199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.593 [2024-10-11 22:58:58.622262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.593 qpair failed and we were unable to recover it. 00:35:55.593 [2024-10-11 22:58:58.622512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.593 [2024-10-11 22:58:58.622596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.593 qpair failed and we were unable to recover it. 00:35:55.593 [2024-10-11 22:58:58.622885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.593 [2024-10-11 22:58:58.622947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.593 qpair failed and we were unable to recover it. 00:35:55.593 [2024-10-11 22:58:58.623248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.593 [2024-10-11 22:58:58.623312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.593 qpair failed and we were unable to recover it. 00:35:55.593 [2024-10-11 22:58:58.623570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.593 [2024-10-11 22:58:58.623635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.593 qpair failed and we were unable to recover it. 
00:35:55.593 [2024-10-11 22:58:58.623892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.593 [2024-10-11 22:58:58.623955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.593 qpair failed and we were unable to recover it. 00:35:55.593 [2024-10-11 22:58:58.624219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.593 [2024-10-11 22:58:58.624282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.593 qpair failed and we were unable to recover it. 00:35:55.593 [2024-10-11 22:58:58.624542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.624625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.624880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.624944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.625190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.625253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 
00:35:55.594 [2024-10-11 22:58:58.625480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.625545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.625877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.625940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.626193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.626258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.626507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.626593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.626884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.626947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 
00:35:55.594 [2024-10-11 22:58:58.627164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.627229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.627496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.627580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.627804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.627867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.628151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.628214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.628449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.628513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 
00:35:55.594 [2024-10-11 22:58:58.628789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.628856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.629103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.629177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.629462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.629525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.629743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.629809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.630014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.630077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 
00:35:55.594 [2024-10-11 22:58:58.630378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.630440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.630665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.630731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.630970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.631034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.631294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.631356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.631637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.631702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 
00:35:55.594 [2024-10-11 22:58:58.631927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.631990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.632220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.632283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.632577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.632641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.632889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.632953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.633206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.633268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 
00:35:55.594 [2024-10-11 22:58:58.633468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.633534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.633814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.633878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.634167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.634229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.634478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.634540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.634807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.634872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 
00:35:55.594 [2024-10-11 22:58:58.635170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.635233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.635532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.635615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.635859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.635922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.636168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.636231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.636474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.636540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 
00:35:55.594 [2024-10-11 22:58:58.636833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.636896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.637095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.637159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.594 [2024-10-11 22:58:58.637404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.594 [2024-10-11 22:58:58.637467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.594 qpair failed and we were unable to recover it. 00:35:55.595 [2024-10-11 22:58:58.637803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-10-11 22:58:58.637868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-10-11 22:58:58.638128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-10-11 22:58:58.638190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 
00:35:55.595 [2024-10-11 22:58:58.638478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-10-11 22:58:58.638541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-10-11 22:58:58.638857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-10-11 22:58:58.638920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-10-11 22:58:58.639165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-10-11 22:58:58.639231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-10-11 22:58:58.639518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-10-11 22:58:58.639602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-10-11 22:58:58.639922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-10-11 22:58:58.639985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 
00:35:55.595 [2024-10-11 22:58:58.640186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-10-11 22:58:58.640252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-10-11 22:58:58.640513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-10-11 22:58:58.640594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-10-11 22:58:58.640789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-10-11 22:58:58.640853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-10-11 22:58:58.641101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-10-11 22:58:58.641167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-10-11 22:58:58.641414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-10-11 22:58:58.641481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 
00:35:55.595 [2024-10-11 22:58:58.641723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-10-11 22:58:58.641791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-10-11 22:58:58.642000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-10-11 22:58:58.642076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-10-11 22:58:58.642369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-10-11 22:58:58.642431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-10-11 22:58:58.642725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-10-11 22:58:58.642790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-10-11 22:58:58.643036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-10-11 22:58:58.643100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 
00:35:55.595 [2024-10-11 22:58:58.643310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-10-11 22:58:58.643371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-10-11 22:58:58.643635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-10-11 22:58:58.643699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-10-11 22:58:58.643937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-10-11 22:58:58.644000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-10-11 22:58:58.644284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-10-11 22:58:58.644347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-10-11 22:58:58.644629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-10-11 22:58:58.644694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 
00:35:55.595 [2024-10-11 22:58:58.644942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.595 [2024-10-11 22:58:58.645009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.595 qpair failed and we were unable to recover it.
[... the same error pair (posix.c:1055:posix_sock_create connect() failed, errno = 111, followed by nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously with timestamps from 22:58:58.645260 through 22:58:58.681904 ...]
00:35:55.598 [2024-10-11 22:58:58.682129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.598 [2024-10-11 22:58:58.682192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.598 qpair failed and we were unable to recover it. 00:35:55.598 [2024-10-11 22:58:58.682414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.598 [2024-10-11 22:58:58.682477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.598 qpair failed and we were unable to recover it. 00:35:55.598 [2024-10-11 22:58:58.682799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.598 [2024-10-11 22:58:58.682863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.598 qpair failed and we were unable to recover it. 00:35:55.598 [2024-10-11 22:58:58.683058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.598 [2024-10-11 22:58:58.683120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.598 qpair failed and we were unable to recover it. 00:35:55.598 [2024-10-11 22:58:58.683361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.598 [2024-10-11 22:58:58.683426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.598 qpair failed and we were unable to recover it. 
00:35:55.598 [2024-10-11 22:58:58.683640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.598 [2024-10-11 22:58:58.683704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.598 qpair failed and we were unable to recover it. 00:35:55.598 [2024-10-11 22:58:58.683992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.598 [2024-10-11 22:58:58.684054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.598 qpair failed and we were unable to recover it. 00:35:55.598 [2024-10-11 22:58:58.684341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.598 [2024-10-11 22:58:58.684405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.598 qpair failed and we were unable to recover it. 00:35:55.598 [2024-10-11 22:58:58.684611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.598 [2024-10-11 22:58:58.684676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.598 qpair failed and we were unable to recover it. 00:35:55.598 [2024-10-11 22:58:58.684859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.598 [2024-10-11 22:58:58.684924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.598 qpair failed and we were unable to recover it. 
00:35:55.598 [2024-10-11 22:58:58.685148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.598 [2024-10-11 22:58:58.685214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.598 qpair failed and we were unable to recover it. 00:35:55.598 [2024-10-11 22:58:58.685504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.598 [2024-10-11 22:58:58.685584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.598 qpair failed and we were unable to recover it. 00:35:55.598 [2024-10-11 22:58:58.685839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.598 [2024-10-11 22:58:58.685903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.598 qpair failed and we were unable to recover it. 00:35:55.598 [2024-10-11 22:58:58.686165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.598 [2024-10-11 22:58:58.686228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.598 qpair failed and we were unable to recover it. 00:35:55.598 [2024-10-11 22:58:58.686465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.598 [2024-10-11 22:58:58.686529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.598 qpair failed and we were unable to recover it. 
00:35:55.598 [2024-10-11 22:58:58.686841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.598 [2024-10-11 22:58:58.686904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.598 qpair failed and we were unable to recover it. 00:35:55.598 [2024-10-11 22:58:58.687160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.598 [2024-10-11 22:58:58.687222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.598 qpair failed and we were unable to recover it. 00:35:55.598 [2024-10-11 22:58:58.687430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.598 [2024-10-11 22:58:58.687492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.598 qpair failed and we were unable to recover it. 00:35:55.598 [2024-10-11 22:58:58.687810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.598 [2024-10-11 22:58:58.687874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.598 qpair failed and we were unable to recover it. 00:35:55.598 [2024-10-11 22:58:58.688160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.598 [2024-10-11 22:58:58.688222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.598 qpair failed and we were unable to recover it. 
00:35:55.598 [2024-10-11 22:58:58.688480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.688542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 00:35:55.599 [2024-10-11 22:58:58.688809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.688872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 00:35:55.599 [2024-10-11 22:58:58.689111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.689173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 00:35:55.599 [2024-10-11 22:58:58.689594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.689658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 00:35:55.599 [2024-10-11 22:58:58.689935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.689999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 
00:35:55.599 [2024-10-11 22:58:58.690249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.690311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 00:35:55.599 [2024-10-11 22:58:58.690582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.690645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 00:35:55.599 [2024-10-11 22:58:58.690931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.690993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 00:35:55.599 [2024-10-11 22:58:58.691247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.691311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 00:35:55.599 [2024-10-11 22:58:58.691580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.691646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 
00:35:55.599 [2024-10-11 22:58:58.691868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.691931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 00:35:55.599 [2024-10-11 22:58:58.692233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.692296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 00:35:55.599 [2024-10-11 22:58:58.692584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.692649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 00:35:55.599 [2024-10-11 22:58:58.692936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.692999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 00:35:55.599 [2024-10-11 22:58:58.693290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.693354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 
00:35:55.599 [2024-10-11 22:58:58.693660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.693724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 00:35:55.599 [2024-10-11 22:58:58.693975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.694051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 00:35:55.599 [2024-10-11 22:58:58.694309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.694373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 00:35:55.599 [2024-10-11 22:58:58.694672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.694736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 00:35:55.599 [2024-10-11 22:58:58.694949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.695014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 
00:35:55.599 [2024-10-11 22:58:58.695210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.695275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 00:35:55.599 [2024-10-11 22:58:58.695533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.695612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 00:35:55.599 [2024-10-11 22:58:58.695907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.695970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 00:35:55.599 [2024-10-11 22:58:58.696260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.696322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 00:35:55.599 [2024-10-11 22:58:58.696520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.696605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 
00:35:55.599 [2024-10-11 22:58:58.696862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.696925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 00:35:55.599 [2024-10-11 22:58:58.697234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.697296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 00:35:55.599 [2024-10-11 22:58:58.697514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.697610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 00:35:55.599 [2024-10-11 22:58:58.697857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.697920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 00:35:55.599 [2024-10-11 22:58:58.698164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.698227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 
00:35:55.599 [2024-10-11 22:58:58.698479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.698543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 00:35:55.599 [2024-10-11 22:58:58.698819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.698882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 00:35:55.599 [2024-10-11 22:58:58.699144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.699207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 00:35:55.599 [2024-10-11 22:58:58.699465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.699529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 00:35:55.599 [2024-10-11 22:58:58.699791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.699854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 
00:35:55.599 [2024-10-11 22:58:58.700098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.700161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.599 qpair failed and we were unable to recover it. 00:35:55.599 [2024-10-11 22:58:58.700411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.599 [2024-10-11 22:58:58.700474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-10-11 22:58:58.700681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-10-11 22:58:58.700745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-10-11 22:58:58.700975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-10-11 22:58:58.701039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-10-11 22:58:58.701296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-10-11 22:58:58.701358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 
00:35:55.600 [2024-10-11 22:58:58.701605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-10-11 22:58:58.701670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-10-11 22:58:58.701923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-10-11 22:58:58.701986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-10-11 22:58:58.702231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-10-11 22:58:58.702293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-10-11 22:58:58.702541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-10-11 22:58:58.702619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-10-11 22:58:58.702853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-10-11 22:58:58.702915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 
00:35:55.600 [2024-10-11 22:58:58.703180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-10-11 22:58:58.703241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-10-11 22:58:58.703448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-10-11 22:58:58.703513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-10-11 22:58:58.703785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-10-11 22:58:58.703849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-10-11 22:58:58.704139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-10-11 22:58:58.704201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-10-11 22:58:58.704487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-10-11 22:58:58.704563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 
00:35:55.600 [2024-10-11 22:58:58.704827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-10-11 22:58:58.704890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-10-11 22:58:58.705141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-10-11 22:58:58.705203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-10-11 22:58:58.705489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-10-11 22:58:58.705585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-10-11 22:58:58.705840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-10-11 22:58:58.705905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-10-11 22:58:58.706156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-10-11 22:58:58.706219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 
00:35:55.600 [2024-10-11 22:58:58.706502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-10-11 22:58:58.706586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it.
00:35:55.603 [the same connect() failure repeated through 2024-10-11 22:58:58.743726 (~115 occurrences), all with errno = 111 (ECONNREFUSED), tqpair=0x7ff3d8000b90, addr=10.0.0.2, port=4420, each followed by "qpair failed and we were unable to recover it."]
00:35:55.603 [2024-10-11 22:58:58.744025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.603 [2024-10-11 22:58:58.744088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.603 qpair failed and we were unable to recover it. 00:35:55.603 [2024-10-11 22:58:58.744365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.603 [2024-10-11 22:58:58.744428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.603 qpair failed and we were unable to recover it. 00:35:55.603 [2024-10-11 22:58:58.744674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.603 [2024-10-11 22:58:58.744738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.603 qpair failed and we were unable to recover it. 00:35:55.603 [2024-10-11 22:58:58.744937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.603 [2024-10-11 22:58:58.745005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.603 qpair failed and we were unable to recover it. 00:35:55.603 [2024-10-11 22:58:58.745252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.603 [2024-10-11 22:58:58.745315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.603 qpair failed and we were unable to recover it. 
00:35:55.603 [2024-10-11 22:58:58.745580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.603 [2024-10-11 22:58:58.745644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.603 qpair failed and we were unable to recover it. 00:35:55.603 [2024-10-11 22:58:58.745926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.603 [2024-10-11 22:58:58.745999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.603 qpair failed and we were unable to recover it. 00:35:55.603 [2024-10-11 22:58:58.746189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.603 [2024-10-11 22:58:58.746255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.603 qpair failed and we were unable to recover it. 00:35:55.603 [2024-10-11 22:58:58.746503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.603 [2024-10-11 22:58:58.746578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.603 qpair failed and we were unable to recover it. 00:35:55.603 [2024-10-11 22:58:58.746876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.603 [2024-10-11 22:58:58.746939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.603 qpair failed and we were unable to recover it. 
00:35:55.603 [2024-10-11 22:58:58.747231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.603 [2024-10-11 22:58:58.747294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.603 qpair failed and we were unable to recover it. 00:35:55.603 [2024-10-11 22:58:58.747572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.603 [2024-10-11 22:58:58.747637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.603 qpair failed and we were unable to recover it. 00:35:55.603 [2024-10-11 22:58:58.747886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.603 [2024-10-11 22:58:58.747952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.603 qpair failed and we were unable to recover it. 00:35:55.603 [2024-10-11 22:58:58.748201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.603 [2024-10-11 22:58:58.748263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.603 qpair failed and we were unable to recover it. 00:35:55.603 [2024-10-11 22:58:58.748526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.603 [2024-10-11 22:58:58.748610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.603 qpair failed and we were unable to recover it. 
00:35:55.603 [2024-10-11 22:58:58.748823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.603 [2024-10-11 22:58:58.748886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.603 qpair failed and we were unable to recover it. 00:35:55.603 [2024-10-11 22:58:58.749143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.603 [2024-10-11 22:58:58.749206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.603 qpair failed and we were unable to recover it. 00:35:55.603 [2024-10-11 22:58:58.749456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.603 [2024-10-11 22:58:58.749517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.603 qpair failed and we were unable to recover it. 00:35:55.603 [2024-10-11 22:58:58.749781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.603 [2024-10-11 22:58:58.749845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.603 qpair failed and we were unable to recover it. 00:35:55.603 [2024-10-11 22:58:58.750143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.603 [2024-10-11 22:58:58.750206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.603 qpair failed and we were unable to recover it. 
00:35:55.603 [2024-10-11 22:58:58.750505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.603 [2024-10-11 22:58:58.750587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.603 qpair failed and we were unable to recover it. 00:35:55.603 [2024-10-11 22:58:58.750893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.603 [2024-10-11 22:58:58.750956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.603 qpair failed and we were unable to recover it. 00:35:55.603 [2024-10-11 22:58:58.751153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.603 [2024-10-11 22:58:58.751219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.603 qpair failed and we were unable to recover it. 00:35:55.603 [2024-10-11 22:58:58.751469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.603 [2024-10-11 22:58:58.751531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.603 qpair failed and we were unable to recover it. 00:35:55.603 [2024-10-11 22:58:58.751855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.603 [2024-10-11 22:58:58.751919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.603 qpair failed and we were unable to recover it. 
00:35:55.604 [2024-10-11 22:58:58.752180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.752243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.752508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.752592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.752846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.752910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.753105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.753166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.753419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.753482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 
00:35:55.604 [2024-10-11 22:58:58.753745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.753809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.754106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.754169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.754466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.754528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.754809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.754872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.755098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.755160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 
00:35:55.604 [2024-10-11 22:58:58.755358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.755425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.755688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.755753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.755997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.756061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.756347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.756410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.756663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.756728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 
00:35:55.604 [2024-10-11 22:58:58.756944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.757008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.757232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.757295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.757563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.757627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.757830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.757893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.758179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.758242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 
00:35:55.604 [2024-10-11 22:58:58.758536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.758613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.758906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.758980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.759282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.759346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.759651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.759716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.759975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.760038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 
00:35:55.604 [2024-10-11 22:58:58.760281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.760344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.760589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.760654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.760952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.761014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.761302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.761365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.761617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.761681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 
00:35:55.604 [2024-10-11 22:58:58.761939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.762002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.762238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.762301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.762538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.762616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.762874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.762938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.763153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.763219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 
00:35:55.604 [2024-10-11 22:58:58.763482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.763545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.763788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.763854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.764098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.764163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.764353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.764416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 00:35:55.604 [2024-10-11 22:58:58.764678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.764742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.604 qpair failed and we were unable to recover it. 
00:35:55.604 [2024-10-11 22:58:58.764985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.604 [2024-10-11 22:58:58.765049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 00:35:55.605 [2024-10-11 22:58:58.765284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-10-11 22:58:58.765346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 00:35:55.605 [2024-10-11 22:58:58.765632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-10-11 22:58:58.765697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 00:35:55.605 [2024-10-11 22:58:58.765949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-10-11 22:58:58.766015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 00:35:55.605 [2024-10-11 22:58:58.766297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-10-11 22:58:58.766360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 
00:35:55.605 [2024-10-11 22:58:58.766658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-10-11 22:58:58.766723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 00:35:55.605 [2024-10-11 22:58:58.766939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-10-11 22:58:58.767003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 00:35:55.605 [2024-10-11 22:58:58.767245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-10-11 22:58:58.767308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 00:35:55.605 [2024-10-11 22:58:58.767608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-10-11 22:58:58.767673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 00:35:55.605 [2024-10-11 22:58:58.767956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-10-11 22:58:58.768020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 
00:35:55.605 [2024-10-11 22:58:58.768267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.768331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.768529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.768615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.768860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.768926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.769153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.769215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.769466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.769528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.769849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.769912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.770155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.770220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.770455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.770517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.770803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.770866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.771163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.771225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.771471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.771535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.771800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.771874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.772072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.772137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.772387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.772454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.772758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.772823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.773017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.773083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.773296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.773361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.773581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.773646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.773932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.773996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.774281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.774345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.774588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.774654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.774946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.775009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.775298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.775362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.775603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.775688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.775973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.776037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.776254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.776317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.776578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.776642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.776847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.776912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.777153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.777215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.777509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.777586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.777849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.605 [2024-10-11 22:58:58.777915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.605 qpair failed and we were unable to recover it.
00:35:55.605 [2024-10-11 22:58:58.778122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.778184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.778441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.778503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.778768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.778832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.779039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.779103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.779364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.779427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.779724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.779788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.779985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.780048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.780300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.780363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.780615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.780679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.780969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.781031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.781274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.781337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.781587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.781653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.781896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.781960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.782207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.782270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.782507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.782582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.782787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.782851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.783144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.783210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.783460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.783523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.783819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.783882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.784182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.784245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.784536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.784620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.784881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.784944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.785194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.785257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.785510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.785597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.785890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.785953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.786202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.786266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.786474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.786541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.786820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.786883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.787178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.787241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.787532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.787615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.787905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.787968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.788213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.788278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.788586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.788652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.788869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.788934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.789173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.789238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.789472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.789535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.789812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.789875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.790122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.790185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.790436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.790498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.790736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.790800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.791056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.606 [2024-10-11 22:58:58.791119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.606 qpair failed and we were unable to recover it.
00:35:55.606 [2024-10-11 22:58:58.791326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.791391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.791686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.791751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.791995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.792059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.792344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.792407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.792646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.792713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.792946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.793009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.793232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.793312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.793525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.793603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.793819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.793885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.794124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.794188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.794434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.794499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.794774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.794838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.795089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.795153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.795402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.795465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.795772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.795836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.796095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.796158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.796405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.796468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.796738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.796802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.797070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.797133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.797378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.797441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.797755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.797821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.798072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.798134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.798320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.798385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.798636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.798703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.798952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.799016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.799299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.799363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.799569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.799632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.799853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.799916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.800164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.800226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.800519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.800594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.800839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.800901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.801162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.801225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.801527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.801623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.801934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.801997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.802305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.802368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.802585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.802653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.802904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.802968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.803259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.803323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.803586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.803651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.804007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.804071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.607 qpair failed and we were unable to recover it.
00:35:55.607 [2024-10-11 22:58:58.804267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.607 [2024-10-11 22:58:58.804333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.608 qpair failed and we were unable to recover it.
00:35:55.608 [2024-10-11 22:58:58.804594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.608 [2024-10-11 22:58:58.804659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.608 qpair failed and we were unable to recover it.
00:35:55.608 [2024-10-11 22:58:58.804951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.608 [2024-10-11 22:58:58.805015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.608 qpair failed and we were unable to recover it.
00:35:55.608 [2024-10-11 22:58:58.805201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.805263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-10-11 22:58:58.805519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.805611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-10-11 22:58:58.805830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.805893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-10-11 22:58:58.806081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.806154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-10-11 22:58:58.806378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.806441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 
00:35:55.608 [2024-10-11 22:58:58.806764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.806829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-10-11 22:58:58.807068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.807131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-10-11 22:58:58.807374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.807440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-10-11 22:58:58.807740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.807805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-10-11 22:58:58.808097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.808178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 
00:35:55.608 [2024-10-11 22:58:58.808396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.808450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-10-11 22:58:58.808703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.808758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-10-11 22:58:58.808986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.809040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-10-11 22:58:58.809207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.809261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-10-11 22:58:58.809515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.809584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 
00:35:55.608 [2024-10-11 22:58:58.809802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.809864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-10-11 22:58:58.810103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.810165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-10-11 22:58:58.810389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.810452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-10-11 22:58:58.810672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.810737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-10-11 22:58:58.810981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.811046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 
00:35:55.608 [2024-10-11 22:58:58.811290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.811358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-10-11 22:58:58.811652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.811717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-10-11 22:58:58.812018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.812082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-10-11 22:58:58.812304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.812369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-10-11 22:58:58.812627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.812693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 
00:35:55.608 [2024-10-11 22:58:58.812912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.812975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-10-11 22:58:58.813172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.813237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-10-11 22:58:58.813478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.813542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-10-11 22:58:58.813853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.813917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-10-11 22:58:58.814126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.814191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 
00:35:55.608 [2024-10-11 22:58:58.814470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.814534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-10-11 22:58:58.814801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.814867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-10-11 22:58:58.815121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.815187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-10-11 22:58:58.815389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-10-11 22:58:58.815452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.815727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.815792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 
00:35:55.609 [2024-10-11 22:58:58.816051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.816114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.816384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.816446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.816654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.816719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.816921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.816985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.817211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.817274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 
00:35:55.609 [2024-10-11 22:58:58.817585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.817651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.817902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.817966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.818195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.818258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.818521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.818610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.818867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.818931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 
00:35:55.609 [2024-10-11 22:58:58.819150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.819215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.819473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.819536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.819790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.819852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.820082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.820143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.820386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.820452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 
00:35:55.609 [2024-10-11 22:58:58.820735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.820801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.821011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.821077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.821384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.821449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.821690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.821755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.821958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.822023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 
00:35:55.609 [2024-10-11 22:58:58.822270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.822333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.822574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.822639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.822951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.823016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.823224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.823288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.823523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.823606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 
00:35:55.609 [2024-10-11 22:58:58.823866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.823933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.824201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.824267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.824581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.824648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.824894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.824959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.825246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.825311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 
00:35:55.609 [2024-10-11 22:58:58.825584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.825649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.825849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.825913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.826116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.826180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.826421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.826485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.826700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.826766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 
00:35:55.609 [2024-10-11 22:58:58.827014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.827080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.827295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.827361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.827613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.827679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.827899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.827963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-10-11 22:58:58.828210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-10-11 22:58:58.828276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 
00:35:55.610 [2024-10-11 22:58:58.828528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.828610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.828830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.828897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.829098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.829163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.829431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.829494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.829768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.829832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.830050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.830115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.830420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.830484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.830759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.830824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.831077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.831150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.831401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.831466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.831721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.831786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.832054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.832117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.832377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.832440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.832717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.832782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.833034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.833099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.833336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.833402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.833621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.833661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.833802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.833840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.834026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.834065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.834223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.834262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.834419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.834459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.834632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.834672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.834808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.834848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.834997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.835035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.835204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.835242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.835370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.835409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.835534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.835582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.835694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.835732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.835840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.835878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.835993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.836031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.836175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.836213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.836346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.836385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.836539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.836588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.836710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.836749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.836875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.836913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.837051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.837090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.837218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.837256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.837419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.837457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.837592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.837631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.837747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.610 [2024-10-11 22:58:58.837785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.610 qpair failed and we were unable to recover it.
00:35:55.610 [2024-10-11 22:58:58.837937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.611 [2024-10-11 22:58:58.837976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.611 qpair failed and we were unable to recover it.
00:35:55.611 [2024-10-11 22:58:58.838167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.611 [2024-10-11 22:58:58.838207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.611 qpair failed and we were unable to recover it.
00:35:55.611 [2024-10-11 22:58:58.838364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.611 [2024-10-11 22:58:58.838404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.611 qpair failed and we were unable to recover it.
00:35:55.611 [2024-10-11 22:58:58.838538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.611 [2024-10-11 22:58:58.838593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.611 qpair failed and we were unable to recover it.
00:35:55.611 [2024-10-11 22:58:58.838760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.611 [2024-10-11 22:58:58.838799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.611 qpair failed and we were unable to recover it.
00:35:55.611 [2024-10-11 22:58:58.838925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.611 [2024-10-11 22:58:58.838964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.611 qpair failed and we were unable to recover it.
00:35:55.611 [2024-10-11 22:58:58.839111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.611 [2024-10-11 22:58:58.839149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.611 qpair failed and we were unable to recover it.
00:35:55.611 [2024-10-11 22:58:58.839314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.611 [2024-10-11 22:58:58.839352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.611 qpair failed and we were unable to recover it.
00:35:55.611 [2024-10-11 22:58:58.839513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.611 [2024-10-11 22:58:58.839567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.611 qpair failed and we were unable to recover it.
00:35:55.611 [2024-10-11 22:58:58.839729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.611 [2024-10-11 22:58:58.839767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.611 qpair failed and we were unable to recover it.
00:35:55.611 [2024-10-11 22:58:58.839935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.611 [2024-10-11 22:58:58.839974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.611 qpair failed and we were unable to recover it.
00:35:55.611 [2024-10-11 22:58:58.840106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.611 [2024-10-11 22:58:58.840144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.611 qpair failed and we were unable to recover it.
00:35:55.611 [2024-10-11 22:58:58.840268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.611 [2024-10-11 22:58:58.840307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.611 qpair failed and we were unable to recover it.
00:35:55.611 [2024-10-11 22:58:58.840473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.611 [2024-10-11 22:58:58.840511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.611 qpair failed and we were unable to recover it.
00:35:55.611 [2024-10-11 22:58:58.840656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.611 [2024-10-11 22:58:58.840696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.611 qpair failed and we were unable to recover it.
00:35:55.611 [2024-10-11 22:58:58.840854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.611 [2024-10-11 22:58:58.840892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.611 qpair failed and we were unable to recover it.
00:35:55.611 [2024-10-11 22:58:58.841025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.611 [2024-10-11 22:58:58.841063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.611 qpair failed and we were unable to recover it.
00:35:55.611 [2024-10-11 22:58:58.841195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.611 [2024-10-11 22:58:58.841234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.887 qpair failed and we were unable to recover it.
00:35:55.887 [2024-10-11 22:58:58.841388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.887 [2024-10-11 22:58:58.841447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:55.887 qpair failed and we were unable to recover it.
00:35:55.887 [2024-10-11 22:58:58.841658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.887 [2024-10-11 22:58:58.841703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:55.887 qpair failed and we were unable to recover it.
00:35:55.887 [2024-10-11 22:58:58.841861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.887 [2024-10-11 22:58:58.841913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:55.887 qpair failed and we were unable to recover it.
00:35:55.887 [2024-10-11 22:58:58.842130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.887 [2024-10-11 22:58:58.842183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:55.887 qpair failed and we were unable to recover it.
00:35:55.887 [2024-10-11 22:58:58.842399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.887 [2024-10-11 22:58:58.842451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:55.887 qpair failed and we were unable to recover it.
00:35:55.887 [2024-10-11 22:58:58.842606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.887 [2024-10-11 22:58:58.842646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.887 qpair failed and we were unable to recover it.
00:35:55.887 [2024-10-11 22:58:58.842838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.887 [2024-10-11 22:58:58.842877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.887 qpair failed and we were unable to recover it.
00:35:55.887 [2024-10-11 22:58:58.843004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.887 [2024-10-11 22:58:58.843042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.887 qpair failed and we were unable to recover it.
00:35:55.887 [2024-10-11 22:58:58.843170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.887 [2024-10-11 22:58:58.843209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.887 qpair failed and we were unable to recover it.
00:35:55.887 [2024-10-11 22:58:58.843337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.887 [2024-10-11 22:58:58.843376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.887 qpair failed and we were unable to recover it.
00:35:55.887 [2024-10-11 22:58:58.843503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.887 [2024-10-11 22:58:58.843541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.887 qpair failed and we were unable to recover it.
00:35:55.887 [2024-10-11 22:58:58.843679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.887 [2024-10-11 22:58:58.843718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.887 qpair failed and we were unable to recover it.
00:35:55.887 [2024-10-11 22:58:58.843839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.887 [2024-10-11 22:58:58.843878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.887 qpair failed and we were unable to recover it.
00:35:55.887 [2024-10-11 22:58:58.846708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.846774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.846987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.847051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.847294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.847360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.847616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.847680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.847907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.847973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.851782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.851881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.852146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.852215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.852469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.852536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.852862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.852927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.853199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.853263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.853529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.853613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.853832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.853897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.854115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.854154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.854273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.854311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.854432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.854471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.854622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.854662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.854810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.854848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.855009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.855059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.855256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.855295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.855422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.855460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.855620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.855660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.855803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.855842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.855999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.856038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.856162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.856201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.856337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.856378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.856530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.856580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.856741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.856780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.856934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.856973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.857138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.857177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.857310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.857350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.857510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.857564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.857712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.857753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.857915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.857955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.858076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.858115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.858243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.858281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.858449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.858489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.858619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.858658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.858793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.858832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.858990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.859030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.859204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.859244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.888 [2024-10-11 22:58:58.859394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.888 [2024-10-11 22:58:58.859433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.888 qpair failed and we were unable to recover it.
00:35:55.889 [2024-10-11 22:58:58.859572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.889 [2024-10-11 22:58:58.859612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.889 qpair failed and we were unable to recover it.
00:35:55.889 [2024-10-11 22:58:58.859743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.889 [2024-10-11 22:58:58.859782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.889 qpair failed and we were unable to recover it.
00:35:55.889 [2024-10-11 22:58:58.859938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.889 [2024-10-11 22:58:58.859978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:55.889 qpair failed and we were unable to recover it.
00:35:55.889 [2024-10-11 22:58:58.860263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.860362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.860618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.860690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.860902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.860968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.861228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.861293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.861479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.861544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 
00:35:55.889 [2024-10-11 22:58:58.861814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.861880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.862176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.862240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.862491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.862571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.862845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.862910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.863138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.863202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 
00:35:55.889 [2024-10-11 22:58:58.863452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.863515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.863779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.863844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.864098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.864163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.864412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.864475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.864748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.864814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 
00:35:55.889 [2024-10-11 22:58:58.865048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.865145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.865475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.865547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.865811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.865877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.866098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.866165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.866418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.866483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 
00:35:55.889 [2024-10-11 22:58:58.866765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.866830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.867058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.867122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.867378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.867442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.867703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.867800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.868063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.868130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 
00:35:55.889 [2024-10-11 22:58:58.868379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.868443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.868719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.868787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.869054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.869120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.869414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.869478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.869746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.869813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 
00:35:55.889 [2024-10-11 22:58:58.870036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.870101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.870332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.870397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.870682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.870722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.870891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.870930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.871065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.871106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 
00:35:55.889 [2024-10-11 22:58:58.871267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.871305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.871462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.871501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.889 qpair failed and we were unable to recover it. 00:35:55.889 [2024-10-11 22:58:58.871662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.889 [2024-10-11 22:58:58.871704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.871865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.871904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.872070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.872111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 
00:35:55.890 [2024-10-11 22:58:58.872284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.872331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.872493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.872531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.872703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.872743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.872913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.872952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.873129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.873224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 
00:35:55.890 [2024-10-11 22:58:58.873456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.873526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.873796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.873865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.874165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.874231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.874522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.874611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.874822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.874864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 
00:35:55.890 [2024-10-11 22:58:58.874999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.875038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.875263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.875331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.875632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.875700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.875886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.875950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.876240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.876307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 
00:35:55.890 [2024-10-11 22:58:58.876506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.876590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.876858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.876922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.877181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.877246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.877507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.877548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.877713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.877773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 
00:35:55.890 [2024-10-11 22:58:58.878070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.878136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.878357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.878425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.878666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.878733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.879018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.879058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.879249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.879313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 
00:35:55.890 [2024-10-11 22:58:58.879586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.879675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.879884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.879949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.880225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.880290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.880498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.880538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.880770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.880836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 
00:35:55.890 [2024-10-11 22:58:58.881054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.881121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.881349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.881418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.882449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.882524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.882814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.882880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.883093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.883162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 
00:35:55.890 [2024-10-11 22:58:58.883459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.883526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.883755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.883824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.884132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.890 [2024-10-11 22:58:58.884197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.890 qpair failed and we were unable to recover it. 00:35:55.890 [2024-10-11 22:58:58.884416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.884481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 00:35:55.891 [2024-10-11 22:58:58.884745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.884812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 
00:35:55.891 [2024-10-11 22:58:58.885126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.885205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 00:35:55.891 [2024-10-11 22:58:58.885415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.885482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 00:35:55.891 [2024-10-11 22:58:58.885711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.885771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 00:35:55.891 [2024-10-11 22:58:58.885905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.885944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 00:35:55.891 [2024-10-11 22:58:58.886231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.886296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 
00:35:55.891 [2024-10-11 22:58:58.886503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.886594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 00:35:55.891 [2024-10-11 22:58:58.886869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.886935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 00:35:55.891 [2024-10-11 22:58:58.887187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.887251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 00:35:55.891 [2024-10-11 22:58:58.887568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.887635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 00:35:55.891 [2024-10-11 22:58:58.887917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.887982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 
00:35:55.891 [2024-10-11 22:58:58.888234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.888299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 00:35:55.891 [2024-10-11 22:58:58.888516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.888599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 00:35:55.891 [2024-10-11 22:58:58.888856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.888922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 00:35:55.891 [2024-10-11 22:58:58.889184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.889248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 00:35:55.891 [2024-10-11 22:58:58.889519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.889603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 
00:35:55.891 [2024-10-11 22:58:58.889864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.889929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 00:35:55.891 [2024-10-11 22:58:58.890115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.890180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 00:35:55.891 [2024-10-11 22:58:58.890482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.890547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 00:35:55.891 [2024-10-11 22:58:58.890842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.890882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 00:35:55.891 [2024-10-11 22:58:58.891038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.891077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 
00:35:55.891 [2024-10-11 22:58:58.891341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.891381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 00:35:55.891 [2024-10-11 22:58:58.891516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.891567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 00:35:55.891 [2024-10-11 22:58:58.891791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.891856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 00:35:55.891 [2024-10-11 22:58:58.892082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.892147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 00:35:55.891 [2024-10-11 22:58:58.892439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.892505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 
00:35:55.891 [2024-10-11 22:58:58.892787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.892826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 00:35:55.891 [2024-10-11 22:58:58.892951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.892990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 00:35:55.891 [2024-10-11 22:58:58.893275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.893351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 00:35:55.891 [2024-10-11 22:58:58.893615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.893683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 00:35:55.891 [2024-10-11 22:58:58.893947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.894013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 
00:35:55.891 [2024-10-11 22:58:58.894258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.894322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 00:35:55.891 [2024-10-11 22:58:58.894579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.891 [2024-10-11 22:58:58.894646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.891 qpair failed and we were unable to recover it. 00:35:55.891 [2024-10-11 22:58:58.894903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.894970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.895261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.895326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.895585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.895644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 
00:35:55.892 [2024-10-11 22:58:58.895769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.895808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.895975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.896014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.896254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.896320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.896607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.896675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.896935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.897000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 
00:35:55.892 [2024-10-11 22:58:58.897269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.897345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.897620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.897688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.897892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.897957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.898185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.898249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.898501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.898579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 
00:35:55.892 [2024-10-11 22:58:58.898843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.898907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.899207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.899272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.899515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.899593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.899816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.899881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.900138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.900197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 
00:35:55.892 [2024-10-11 22:58:58.900335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.900374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.900621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.900690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.900902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.900968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.901221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.901286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.901534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.901622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 
00:35:55.892 [2024-10-11 22:58:58.901877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.901918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.902080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.902120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.902413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.902452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.902631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.902672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.902946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.903013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 
00:35:55.892 [2024-10-11 22:58:58.903326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.903390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.903630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.903671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.903792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.903833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.904051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.904117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.904387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.904454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 
00:35:55.892 [2024-10-11 22:58:58.904734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.904776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.904901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.904940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.905130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.905189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.905349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.905390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.906082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.906140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 
00:35:55.892 [2024-10-11 22:58:58.906324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.906378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.906644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.906719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.892 [2024-10-11 22:58:58.906933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.892 [2024-10-11 22:58:58.906985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.892 qpair failed and we were unable to recover it. 00:35:55.893 [2024-10-11 22:58:58.907206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.893 [2024-10-11 22:58:58.907264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.893 qpair failed and we were unable to recover it. 00:35:55.893 [2024-10-11 22:58:58.907473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.893 [2024-10-11 22:58:58.907529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.893 qpair failed and we were unable to recover it. 
00:35:55.893 [2024-10-11 22:58:58.907788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.893 [2024-10-11 22:58:58.907859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.893 qpair failed and we were unable to recover it. 00:35:55.893 [2024-10-11 22:58:58.908153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.893 [2024-10-11 22:58:58.908225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.893 qpair failed and we were unable to recover it. 00:35:55.893 [2024-10-11 22:58:58.908471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.893 [2024-10-11 22:58:58.908523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.893 qpair failed and we were unable to recover it. 00:35:55.893 [2024-10-11 22:58:58.908809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.893 [2024-10-11 22:58:58.908862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.893 qpair failed and we were unable to recover it. 00:35:55.893 [2024-10-11 22:58:58.909100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.893 [2024-10-11 22:58:58.909172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.893 qpair failed and we were unable to recover it. 
00:35:55.893 [2024-10-11 22:58:58.909426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.893 [2024-10-11 22:58:58.909488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.893 qpair failed and we were unable to recover it. 00:35:55.893 [2024-10-11 22:58:58.909705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.893 [2024-10-11 22:58:58.909777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.893 qpair failed and we were unable to recover it. 00:35:55.893 [2024-10-11 22:58:58.909935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.893 [2024-10-11 22:58:58.909990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.893 qpair failed and we were unable to recover it. 00:35:55.893 [2024-10-11 22:58:58.910235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.893 [2024-10-11 22:58:58.910307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.893 qpair failed and we were unable to recover it. 00:35:55.893 [2024-10-11 22:58:58.910523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.893 [2024-10-11 22:58:58.910593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.893 qpair failed and we were unable to recover it. 
00:35:55.893 [2024-10-11 22:58:58.910802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.893 [2024-10-11 22:58:58.910890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.893 qpair failed and we were unable to recover it. 00:35:55.893 [2024-10-11 22:58:58.911121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.893 [2024-10-11 22:58:58.911194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.893 qpair failed and we were unable to recover it. 00:35:55.893 [2024-10-11 22:58:58.911412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.893 [2024-10-11 22:58:58.911464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.893 qpair failed and we were unable to recover it. 00:35:55.893 [2024-10-11 22:58:58.911669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.893 [2024-10-11 22:58:58.911751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.893 qpair failed and we were unable to recover it. 00:35:55.893 [2024-10-11 22:58:58.912005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.893 [2024-10-11 22:58:58.912077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.893 qpair failed and we were unable to recover it. 
00:35:55.893 [2024-10-11 22:58:58.912318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.893 [2024-10-11 22:58:58.912391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.893 qpair failed and we were unable to recover it. 00:35:55.893 [2024-10-11 22:58:58.912613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.893 [2024-10-11 22:58:58.912691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.893 qpair failed and we were unable to recover it. 00:35:55.893 [2024-10-11 22:58:58.912928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.893 [2024-10-11 22:58:58.912999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.893 qpair failed and we were unable to recover it. 00:35:55.893 [2024-10-11 22:58:58.913244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.893 [2024-10-11 22:58:58.913317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.893 qpair failed and we were unable to recover it. 00:35:55.893 [2024-10-11 22:58:58.913513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.893 [2024-10-11 22:58:58.913576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.893 qpair failed and we were unable to recover it. 
00:35:55.893 [2024-10-11 22:58:58.913824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.893 [2024-10-11 22:58:58.913896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.893 qpair failed and we were unable to recover it. 00:35:55.893 [2024-10-11 22:58:58.914176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.893 [2024-10-11 22:58:58.914246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.893 qpair failed and we were unable to recover it. 00:35:55.893 [2024-10-11 22:58:58.914443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.893 [2024-10-11 22:58:58.914496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.893 qpair failed and we were unable to recover it. 00:35:55.893 [2024-10-11 22:58:58.914723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.893 [2024-10-11 22:58:58.914797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.893 qpair failed and we were unable to recover it. 00:35:55.893 [2024-10-11 22:58:58.915032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.893 [2024-10-11 22:58:58.915101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.893 qpair failed and we were unable to recover it. 
00:35:55.893 [2024-10-11 22:58:58.915331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.893 [2024-10-11 22:58:58.915384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.893 qpair failed and we were unable to recover it.
00:35:55.893 [2024-10-11 22:58:58.915573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.893 [2024-10-11 22:58:58.915627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.893 qpair failed and we were unable to recover it.
00:35:55.893 [2024-10-11 22:58:58.915815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.893 [2024-10-11 22:58:58.915894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.893 qpair failed and we were unable to recover it.
00:35:55.893 [2024-10-11 22:58:58.916115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.893 [2024-10-11 22:58:58.916169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.893 qpair failed and we were unable to recover it.
00:35:55.893 [2024-10-11 22:58:58.916356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.893 [2024-10-11 22:58:58.916409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.893 qpair failed and we were unable to recover it.
00:35:55.893 [2024-10-11 22:58:58.916622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.893 [2024-10-11 22:58:58.916676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.893 qpair failed and we were unable to recover it.
00:35:55.893 [2024-10-11 22:58:58.916872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.893 [2024-10-11 22:58:58.916923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.893 qpair failed and we were unable to recover it.
00:35:55.893 [2024-10-11 22:58:58.917143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.893 [2024-10-11 22:58:58.917197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.893 qpair failed and we were unable to recover it.
00:35:55.893 [2024-10-11 22:58:58.917399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.893 [2024-10-11 22:58:58.917451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.893 qpair failed and we were unable to recover it.
00:35:55.893 [2024-10-11 22:58:58.917699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.893 [2024-10-11 22:58:58.917771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.893 qpair failed and we were unable to recover it.
00:35:55.893 [2024-10-11 22:58:58.918022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.893 [2024-10-11 22:58:58.918093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.893 qpair failed and we were unable to recover it.
00:35:55.893 [2024-10-11 22:58:58.918297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.893 [2024-10-11 22:58:58.918350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.893 qpair failed and we were unable to recover it.
00:35:55.893 [2024-10-11 22:58:58.918610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.893 [2024-10-11 22:58:58.918681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.893 qpair failed and we were unable to recover it.
00:35:55.893 [2024-10-11 22:58:58.918829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.893 [2024-10-11 22:58:58.918883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.919174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.919245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.919458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.919509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.919754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.919828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.920027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.920101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.920287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.920340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.920538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.920604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.920893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.920974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.921150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.921203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.921384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.921437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.921604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.921658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.921910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.921979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.922213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.922284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.922482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.922534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.922791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.922863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.923154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.923225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.923433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.923485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.923739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.923813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.924010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.924083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.924304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.924355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.924521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.924587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.924809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.924883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.925087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.925159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.925334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.925387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.925629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.925682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.925901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.925953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.926171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.926224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.926419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.926471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.926647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.926703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.926952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.927024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.927234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.927286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.927460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.927512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.927709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.927782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.927997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.928069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.928293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.928347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.928566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.928619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.928817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.928890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.929131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.929203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.929414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.929466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.929720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.929793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.930094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.894 [2024-10-11 22:58:58.930149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.894 qpair failed and we were unable to recover it.
00:35:55.894 [2024-10-11 22:58:58.930357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.930408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.930630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.930708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.930964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.931036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.931209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.931284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.931501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.931563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.931759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.931833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.932072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.932133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.932342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.932394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.932622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.932699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.932942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.933015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.933256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.933337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.933591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.933645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.933842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.933915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.934100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.934172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.934331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.934384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.934574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.934629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.934905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.934975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.935190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.935266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.935487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.935539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.935758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.935810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.936039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.936092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.936295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.936349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.936608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.936664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.936910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.936982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.937217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.937290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.937492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.937545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.937766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.937837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.938079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.938152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.938329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.938383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.938574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.938629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.938878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.938949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.939199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.939271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.939445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.939500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.939825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.939902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.940113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.940166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.940369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.940421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.940594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.940650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.940892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.940964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.941171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.941225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.941436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.941488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.895 qpair failed and we were unable to recover it.
00:35:55.895 [2024-10-11 22:58:58.941709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.895 [2024-10-11 22:58:58.941783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.896 qpair failed and we were unable to recover it.
00:35:55.896 [2024-10-11 22:58:58.942033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.942104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 00:35:55.896 [2024-10-11 22:58:58.942323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.942375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 00:35:55.896 [2024-10-11 22:58:58.942572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.942626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 00:35:55.896 [2024-10-11 22:58:58.942787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.942839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 00:35:55.896 [2024-10-11 22:58:58.943022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.943074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 
00:35:55.896 [2024-10-11 22:58:58.943236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.943289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 00:35:55.896 [2024-10-11 22:58:58.943494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.943546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 00:35:55.896 [2024-10-11 22:58:58.943710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.943763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 00:35:55.896 [2024-10-11 22:58:58.944003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.944074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 00:35:55.896 [2024-10-11 22:58:58.944280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.944331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 
00:35:55.896 [2024-10-11 22:58:58.944510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.944577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 00:35:55.896 [2024-10-11 22:58:58.944864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.944943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 00:35:55.896 [2024-10-11 22:58:58.945195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.945264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 00:35:55.896 [2024-10-11 22:58:58.945430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.945484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 00:35:55.896 [2024-10-11 22:58:58.945750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.945821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 
00:35:55.896 [2024-10-11 22:58:58.946106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.946177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 00:35:55.896 [2024-10-11 22:58:58.946343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.946395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 00:35:55.896 [2024-10-11 22:58:58.946586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.946641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 00:35:55.896 [2024-10-11 22:58:58.946803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.946858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 00:35:55.896 [2024-10-11 22:58:58.947122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.947194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 
00:35:55.896 [2024-10-11 22:58:58.947402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.947456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 00:35:55.896 [2024-10-11 22:58:58.947643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.947716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 00:35:55.896 [2024-10-11 22:58:58.947925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.947997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 00:35:55.896 [2024-10-11 22:58:58.948191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.948245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 00:35:55.896 [2024-10-11 22:58:58.948404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.948456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 
00:35:55.896 [2024-10-11 22:58:58.948643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.948722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 00:35:55.896 [2024-10-11 22:58:58.948960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.949033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 00:35:55.896 [2024-10-11 22:58:58.949192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.949243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 00:35:55.896 [2024-10-11 22:58:58.949388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.949440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 00:35:55.896 [2024-10-11 22:58:58.949676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.949751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 
00:35:55.896 [2024-10-11 22:58:58.950038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.950110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 00:35:55.896 [2024-10-11 22:58:58.950312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.950364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 00:35:55.896 [2024-10-11 22:58:58.950578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.950639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 00:35:55.896 [2024-10-11 22:58:58.950823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.950907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 00:35:55.896 [2024-10-11 22:58:58.951073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.951126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.896 qpair failed and we were unable to recover it. 
00:35:55.896 [2024-10-11 22:58:58.951337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.896 [2024-10-11 22:58:58.951389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.951616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.951691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.951931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.952009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.952251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.952304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.952465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.952517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 
00:35:55.897 [2024-10-11 22:58:58.952714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.952766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.952976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.953048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.953262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.953315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.953522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.953604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.953868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.953939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 
00:35:55.897 [2024-10-11 22:58:58.954185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.954257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.954513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.954582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.954782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.954854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.955135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.955207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.955417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.955470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 
00:35:55.897 [2024-10-11 22:58:58.955776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.955850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.956088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.956160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.956326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.956381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.956592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.956646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.956932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.957003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 
00:35:55.897 [2024-10-11 22:58:58.957224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.957295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.957535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.957599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.957853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.957926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.958068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.958120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.958350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.958403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 
00:35:55.897 [2024-10-11 22:58:58.958615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.958694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.958847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.958902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.959119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.959192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.959378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.959430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.959655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.959726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 
00:35:55.897 [2024-10-11 22:58:58.959965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.960035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.960269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.960321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.960572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.960626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.960846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.960918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.961211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.961281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 
00:35:55.897 [2024-10-11 22:58:58.961528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.961604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.961889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.961960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.962239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.962320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.962539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.962607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.962843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.962895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 
00:35:55.897 [2024-10-11 22:58:58.963186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.897 [2024-10-11 22:58:58.963257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.897 qpair failed and we were unable to recover it. 00:35:55.897 [2024-10-11 22:58:58.963464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.898 [2024-10-11 22:58:58.963516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.898 qpair failed and we were unable to recover it. 00:35:55.898 [2024-10-11 22:58:58.963757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.898 [2024-10-11 22:58:58.963811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.898 qpair failed and we were unable to recover it. 00:35:55.898 [2024-10-11 22:58:58.964051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.898 [2024-10-11 22:58:58.964123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.898 qpair failed and we were unable to recover it. 00:35:55.898 [2024-10-11 22:58:58.964401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.898 [2024-10-11 22:58:58.964473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.898 qpair failed and we were unable to recover it. 
00:35:55.898 [2024-10-11 22:58:58.964785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.898 [2024-10-11 22:58:58.964857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.898 qpair failed and we were unable to recover it. 00:35:55.898 [2024-10-11 22:58:58.965140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.898 [2024-10-11 22:58:58.965212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.898 qpair failed and we were unable to recover it. 00:35:55.898 [2024-10-11 22:58:58.965423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.898 [2024-10-11 22:58:58.965475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.898 qpair failed and we were unable to recover it. 00:35:55.898 [2024-10-11 22:58:58.965924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.898 [2024-10-11 22:58:58.965979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.898 qpair failed and we were unable to recover it. 00:35:55.898 [2024-10-11 22:58:58.966220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.898 [2024-10-11 22:58:58.966293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.898 qpair failed and we were unable to recover it. 
00:35:55.898 [2024-10-11 22:58:58.966535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.898 [2024-10-11 22:58:58.966619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.898 qpair failed and we were unable to recover it.
00:35:55.901 [... the same three-message sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats continuously from 22:58:58.966535 through 22:58:59.001374 ...]
00:35:55.901 [2024-10-11 22:58:59.001540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.001602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 00:35:55.901 [2024-10-11 22:58:59.001885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.001956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 00:35:55.901 [2024-10-11 22:58:59.002243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.002312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 00:35:55.901 [2024-10-11 22:58:59.002518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.002583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 00:35:55.901 [2024-10-11 22:58:59.002868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.002941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 
00:35:55.901 [2024-10-11 22:58:59.003221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.003293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 00:35:55.901 [2024-10-11 22:58:59.003512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.003581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 00:35:55.901 [2024-10-11 22:58:59.003769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.003843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 00:35:55.901 [2024-10-11 22:58:59.004074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.004145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 00:35:55.901 [2024-10-11 22:58:59.004446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.004516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 
00:35:55.901 [2024-10-11 22:58:59.004760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.004832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 00:35:55.901 [2024-10-11 22:58:59.005121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.005193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 00:35:55.901 [2024-10-11 22:58:59.005434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.005486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 00:35:55.901 [2024-10-11 22:58:59.005774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.005829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 00:35:55.901 [2024-10-11 22:58:59.006114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.006185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 
00:35:55.901 [2024-10-11 22:58:59.006385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.006437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 00:35:55.901 [2024-10-11 22:58:59.006694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.006765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 00:35:55.901 [2024-10-11 22:58:59.006955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.007032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 00:35:55.901 [2024-10-11 22:58:59.007311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.007381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 00:35:55.901 [2024-10-11 22:58:59.007637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.007709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 
00:35:55.901 [2024-10-11 22:58:59.007978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.008049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 00:35:55.901 [2024-10-11 22:58:59.008273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.008344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 00:35:55.901 [2024-10-11 22:58:59.008547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.008610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 00:35:55.901 [2024-10-11 22:58:59.008866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.008919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 00:35:55.901 [2024-10-11 22:58:59.009197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.009267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 
00:35:55.901 [2024-10-11 22:58:59.009508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.009572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 00:35:55.901 [2024-10-11 22:58:59.009858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.009931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 00:35:55.901 [2024-10-11 22:58:59.010178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.010249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 00:35:55.901 [2024-10-11 22:58:59.010450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.010502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 00:35:55.901 [2024-10-11 22:58:59.010769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.010839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 
00:35:55.901 [2024-10-11 22:58:59.011122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.011193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 00:35:55.901 [2024-10-11 22:58:59.011364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.011416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 00:35:55.901 [2024-10-11 22:58:59.011684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.011766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 00:35:55.901 [2024-10-11 22:58:59.012023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.901 [2024-10-11 22:58:59.012094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.901 qpair failed and we were unable to recover it. 00:35:55.901 [2024-10-11 22:58:59.012295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.012372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 
00:35:55.902 [2024-10-11 22:58:59.012586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.012639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.012883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.012953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.013236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.013306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.013562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.013616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.013804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.013876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 
00:35:55.902 [2024-10-11 22:58:59.014159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.014230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.014429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.014482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.014708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.014783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.015065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.015136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.015408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.015479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 
00:35:55.902 [2024-10-11 22:58:59.015783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.015854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.016072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.016142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.016342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.016397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.016625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.016702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.016944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.017014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 
00:35:55.902 [2024-10-11 22:58:59.017290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.017359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.017638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.017692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.017892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.017944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.018153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.018206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.018404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.018457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 
00:35:55.902 [2024-10-11 22:58:59.018653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.018706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.018912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.018964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.019120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.019173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.019412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.019465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.019775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.019846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 
00:35:55.902 [2024-10-11 22:58:59.020112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.020182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.020371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.020423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.020658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.020730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.021023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.021093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.021333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.021385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 
00:35:55.902 [2024-10-11 22:58:59.021602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.021655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.021847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.021920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.022193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.022262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.022461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.022513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.022819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.022889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 
00:35:55.902 [2024-10-11 22:58:59.023131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.023201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.023409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.023462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.023750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.023830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.024113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.024184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 00:35:55.902 [2024-10-11 22:58:59.024413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.902 [2024-10-11 22:58:59.024467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.902 qpair failed and we were unable to recover it. 
00:35:55.902 [2024-10-11 22:58:59.024734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.903 [2024-10-11 22:58:59.024794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.903 qpair failed and we were unable to recover it.
[The same three-line failure repeats continuously from 22:58:59.024734 through 22:58:59.059590: every connect() attempt to 10.0.0.2:4420 for tqpair=0x7ff3cc000b90 fails with errno = 111, and each time the qpair could not be recovered.]
00:35:55.905 [2024-10-11 22:58:59.059804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.905 [2024-10-11 22:58:59.059876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.905 qpair failed and we were unable to recover it. 00:35:55.905 [2024-10-11 22:58:59.060072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.905 [2024-10-11 22:58:59.060142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.905 qpair failed and we were unable to recover it. 00:35:55.905 [2024-10-11 22:58:59.060381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.060442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.060688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.060759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.061037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.061108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 
00:35:55.906 [2024-10-11 22:58:59.061308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.061361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.061538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.061601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.061841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.061912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.062170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.062244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.062418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.062470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 
00:35:55.906 [2024-10-11 22:58:59.062686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.062759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.062984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.063054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.063261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.063314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.063510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.063591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.063777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.063830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 
00:35:55.906 [2024-10-11 22:58:59.064007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.064059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.064249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.064302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.064489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.064542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.064735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.064788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.064981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.065033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 
00:35:55.906 [2024-10-11 22:58:59.065233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.065285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.065489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.065544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.065732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.065783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.065937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.065989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.066207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.066260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 
00:35:55.906 [2024-10-11 22:58:59.066459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.066511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.066704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.066756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.066923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.066976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.067192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.067244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.067436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.067489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 
00:35:55.906 [2024-10-11 22:58:59.067750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.067804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.068009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.068081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.068271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.068325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.068484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.068537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.068735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.068807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 
00:35:55.906 [2024-10-11 22:58:59.069089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.069160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.069360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.069412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.069596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.069651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.069872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.069924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.070172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.070245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 
00:35:55.906 [2024-10-11 22:58:59.070439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.070492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.070694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.070769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.071016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.906 [2024-10-11 22:58:59.071078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.906 qpair failed and we were unable to recover it. 00:35:55.906 [2024-10-11 22:58:59.071323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.071375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 00:35:55.907 [2024-10-11 22:58:59.071536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.071599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 
00:35:55.907 [2024-10-11 22:58:59.071765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.071819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 00:35:55.907 [2024-10-11 22:58:59.072073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.072145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 00:35:55.907 [2024-10-11 22:58:59.072389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.072441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 00:35:55.907 [2024-10-11 22:58:59.072645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.072722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 00:35:55.907 [2024-10-11 22:58:59.072910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.072986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 
00:35:55.907 [2024-10-11 22:58:59.073191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.073244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 00:35:55.907 [2024-10-11 22:58:59.073401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.073453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 00:35:55.907 [2024-10-11 22:58:59.073696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.073769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 00:35:55.907 [2024-10-11 22:58:59.073962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.074038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 00:35:55.907 [2024-10-11 22:58:59.074238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.074292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 
00:35:55.907 [2024-10-11 22:58:59.074455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.074510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 00:35:55.907 [2024-10-11 22:58:59.074753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.074806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 00:35:55.907 [2024-10-11 22:58:59.074989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.075042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 00:35:55.907 [2024-10-11 22:58:59.075247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.075298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 00:35:55.907 [2024-10-11 22:58:59.075472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.075525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 
00:35:55.907 [2024-10-11 22:58:59.075716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.075769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 00:35:55.907 [2024-10-11 22:58:59.075954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.076006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 00:35:55.907 [2024-10-11 22:58:59.076160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.076213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 00:35:55.907 [2024-10-11 22:58:59.076425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.076478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 00:35:55.907 [2024-10-11 22:58:59.076661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.076715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 
00:35:55.907 [2024-10-11 22:58:59.076920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.076973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 00:35:55.907 [2024-10-11 22:58:59.077217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.077269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 00:35:55.907 [2024-10-11 22:58:59.077449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.077502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 00:35:55.907 [2024-10-11 22:58:59.077704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.077759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 00:35:55.907 [2024-10-11 22:58:59.077995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.078067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 
00:35:55.907 [2024-10-11 22:58:59.078258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.078311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 00:35:55.907 [2024-10-11 22:58:59.078482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.078537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 00:35:55.907 [2024-10-11 22:58:59.078753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.078825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 00:35:55.907 [2024-10-11 22:58:59.079055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.079127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 00:35:55.907 [2024-10-11 22:58:59.079327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.907 [2024-10-11 22:58:59.079379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.907 qpair failed and we were unable to recover it. 
00:35:55.907 [2024-10-11 22:58:59.079585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.907 [2024-10-11 22:58:59.079638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.907 qpair failed and we were unable to recover it.
00:35:55.910 [2024-10-11 22:58:59.112204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.910 [2024-10-11 22:58:59.112259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.910 qpair failed and we were unable to recover it. 00:35:55.910 [2024-10-11 22:58:59.112478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.910 [2024-10-11 22:58:59.112530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.910 qpair failed and we were unable to recover it. 00:35:55.910 [2024-10-11 22:58:59.112749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.910 [2024-10-11 22:58:59.112821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.910 qpair failed and we were unable to recover it. 00:35:55.910 [2024-10-11 22:58:59.113109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.910 [2024-10-11 22:58:59.113182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.910 qpair failed and we were unable to recover it. 00:35:55.910 [2024-10-11 22:58:59.113388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.910 [2024-10-11 22:58:59.113440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.910 qpair failed and we were unable to recover it. 
00:35:55.910 [2024-10-11 22:58:59.113701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.910 [2024-10-11 22:58:59.113755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.910 qpair failed and we were unable to recover it. 00:35:55.910 [2024-10-11 22:58:59.114030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.910 [2024-10-11 22:58:59.114102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.910 qpair failed and we were unable to recover it. 00:35:55.910 [2024-10-11 22:58:59.114305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.910 [2024-10-11 22:58:59.114357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.910 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.114562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.114615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.114848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.114922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 
00:35:55.911 [2024-10-11 22:58:59.115137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.115209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.115424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.115477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.115709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.115782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.116045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.116116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.116295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.116347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 
00:35:55.911 [2024-10-11 22:58:59.116601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.116654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.116827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.116900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.117148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.117219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.117435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.117486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.117753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.117831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 
00:35:55.911 [2024-10-11 22:58:59.118079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.118152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.118371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.118423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.118642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.118717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.118975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.119048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.119249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.119301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 
00:35:55.911 [2024-10-11 22:58:59.119539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.119600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.119849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.119922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.120165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.120218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.120426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.120478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.120698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.120770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 
00:35:55.911 [2024-10-11 22:58:59.121028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.121083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.121333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.121404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.121675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.121747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.121938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.121990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.122145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.122197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 
00:35:55.911 [2024-10-11 22:58:59.122383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.122435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.122678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.122751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.122990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.123062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.123236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.123290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.123457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.123520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 
00:35:55.911 [2024-10-11 22:58:59.123735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.123806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.123985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.124057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.124303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.124355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.124582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.124635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.124923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.124994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 
00:35:55.911 [2024-10-11 22:58:59.125248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.125301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.125512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.125593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.125799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.911 [2024-10-11 22:58:59.125884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.911 qpair failed and we were unable to recover it. 00:35:55.911 [2024-10-11 22:58:59.126186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.912 [2024-10-11 22:58:59.126257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.912 qpair failed and we were unable to recover it. 00:35:55.912 [2024-10-11 22:58:59.126463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.912 [2024-10-11 22:58:59.126515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.912 qpair failed and we were unable to recover it. 
00:35:55.912 [2024-10-11 22:58:59.126752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.912 [2024-10-11 22:58:59.126826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.912 qpair failed and we were unable to recover it. 00:35:55.912 [2024-10-11 22:58:59.127022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.912 [2024-10-11 22:58:59.127097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.912 qpair failed and we were unable to recover it. 00:35:55.912 [2024-10-11 22:58:59.127305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.912 [2024-10-11 22:58:59.127357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.912 qpair failed and we were unable to recover it. 00:35:55.912 [2024-10-11 22:58:59.127579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.912 [2024-10-11 22:58:59.127633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.912 qpair failed and we were unable to recover it. 00:35:55.912 [2024-10-11 22:58:59.127839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.912 [2024-10-11 22:58:59.127892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.912 qpair failed and we were unable to recover it. 
00:35:55.912 [2024-10-11 22:58:59.128149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.912 [2024-10-11 22:58:59.128219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.912 qpair failed and we were unable to recover it. 00:35:55.912 [2024-10-11 22:58:59.128379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.912 [2024-10-11 22:58:59.128431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.912 qpair failed and we were unable to recover it. 00:35:55.912 [2024-10-11 22:58:59.128659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.912 [2024-10-11 22:58:59.128731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.912 qpair failed and we were unable to recover it. 00:35:55.912 [2024-10-11 22:58:59.128969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.912 [2024-10-11 22:58:59.129046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.912 qpair failed and we were unable to recover it. 00:35:55.912 [2024-10-11 22:58:59.129226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.912 [2024-10-11 22:58:59.129278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.912 qpair failed and we were unable to recover it. 
00:35:55.912 [2024-10-11 22:58:59.129472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.912 [2024-10-11 22:58:59.129525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.912 qpair failed and we were unable to recover it. 00:35:55.912 [2024-10-11 22:58:59.129754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.912 [2024-10-11 22:58:59.129825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.912 qpair failed and we were unable to recover it. 00:35:55.912 [2024-10-11 22:58:59.130080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.912 [2024-10-11 22:58:59.130134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.912 qpair failed and we were unable to recover it. 00:35:55.912 [2024-10-11 22:58:59.130336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.912 [2024-10-11 22:58:59.130390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.912 qpair failed and we were unable to recover it. 00:35:55.912 [2024-10-11 22:58:59.130607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.912 [2024-10-11 22:58:59.130660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.912 qpair failed and we were unable to recover it. 
00:35:55.912 [2024-10-11 22:58:59.130874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.912 [2024-10-11 22:58:59.130926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.912 qpair failed and we were unable to recover it. 00:35:55.912 [2024-10-11 22:58:59.131105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.912 [2024-10-11 22:58:59.131156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.912 qpair failed and we were unable to recover it. 00:35:55.912 [2024-10-11 22:58:59.131363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.912 [2024-10-11 22:58:59.131416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.912 qpair failed and we were unable to recover it. 00:35:55.912 [2024-10-11 22:58:59.131699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.912 [2024-10-11 22:58:59.131754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.912 qpair failed and we were unable to recover it. 00:35:55.912 [2024-10-11 22:58:59.131998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.912 [2024-10-11 22:58:59.132069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.912 qpair failed and we were unable to recover it. 
00:35:55.912 [2024-10-11 22:58:59.132240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.912 [2024-10-11 22:58:59.132294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.912 qpair failed and we were unable to recover it. 00:35:55.912 [2024-10-11 22:58:59.132477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.912 [2024-10-11 22:58:59.132530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.912 qpair failed and we were unable to recover it. 00:35:55.912 [2024-10-11 22:58:59.132744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.912 [2024-10-11 22:58:59.132815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.912 qpair failed and we were unable to recover it. 00:35:55.912 [2024-10-11 22:58:59.133024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.912 [2024-10-11 22:58:59.133077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.912 qpair failed and we were unable to recover it. 00:35:55.912 [2024-10-11 22:58:59.133261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.912 [2024-10-11 22:58:59.133314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:55.912 qpair failed and we were unable to recover it. 
00:35:55.912 [2024-10-11 22:58:59.133572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.912 [2024-10-11 22:58:59.133625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:55.912 qpair failed and we were unable to recover it.
[the three lines above repeat verbatim with advancing timestamps, from 22:58:59.133 through 22:58:59.167 (job time 00:35:55.912 to 00:35:56.194); every connect() attempt to tqpair=0x7ff3cc000b90 at 10.0.0.2:4420 was refused with errno = 111 and no qpair was recovered]
00:35:56.194 [2024-10-11 22:58:59.167591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.194 [2024-10-11 22:58:59.167645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.194 qpair failed and we were unable to recover it. 00:35:56.194 [2024-10-11 22:58:59.167845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.194 [2024-10-11 22:58:59.167919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.194 qpair failed and we were unable to recover it. 00:35:56.194 [2024-10-11 22:58:59.168200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.194 [2024-10-11 22:58:59.168271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.194 qpair failed and we were unable to recover it. 00:35:56.194 [2024-10-11 22:58:59.168504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.194 [2024-10-11 22:58:59.168565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.194 qpair failed and we were unable to recover it. 00:35:56.194 [2024-10-11 22:58:59.168809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.194 [2024-10-11 22:58:59.168880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.194 qpair failed and we were unable to recover it. 
00:35:56.194 [2024-10-11 22:58:59.169106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.194 [2024-10-11 22:58:59.169176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.194 qpair failed and we were unable to recover it. 00:35:56.194 [2024-10-11 22:58:59.169415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.194 [2024-10-11 22:58:59.169467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.194 qpair failed and we were unable to recover it. 00:35:56.194 [2024-10-11 22:58:59.169751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.194 [2024-10-11 22:58:59.169823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.194 qpair failed and we were unable to recover it. 00:35:56.194 [2024-10-11 22:58:59.170023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.194 [2024-10-11 22:58:59.170095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.194 qpair failed and we were unable to recover it. 00:35:56.194 [2024-10-11 22:58:59.170374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.194 [2024-10-11 22:58:59.170453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.194 qpair failed and we were unable to recover it. 
00:35:56.194 [2024-10-11 22:58:59.170694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.194 [2024-10-11 22:58:59.170766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.194 qpair failed and we were unable to recover it. 00:35:56.194 [2024-10-11 22:58:59.170991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.194 [2024-10-11 22:58:59.171045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.194 qpair failed and we were unable to recover it. 00:35:56.194 [2024-10-11 22:58:59.171200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.194 [2024-10-11 22:58:59.171254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.194 qpair failed and we were unable to recover it. 00:35:56.194 [2024-10-11 22:58:59.171471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.194 [2024-10-11 22:58:59.171523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.194 qpair failed and we were unable to recover it. 00:35:56.194 [2024-10-11 22:58:59.171785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.194 [2024-10-11 22:58:59.171854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.194 qpair failed and we were unable to recover it. 
00:35:56.194 [2024-10-11 22:58:59.172087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.194 [2024-10-11 22:58:59.172159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.194 qpair failed and we were unable to recover it. 00:35:56.194 [2024-10-11 22:58:59.172367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.194 [2024-10-11 22:58:59.172422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.194 qpair failed and we were unable to recover it. 00:35:56.194 [2024-10-11 22:58:59.172648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.194 [2024-10-11 22:58:59.172719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.194 qpair failed and we were unable to recover it. 00:35:56.194 [2024-10-11 22:58:59.172968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.194 [2024-10-11 22:58:59.173040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.194 qpair failed and we were unable to recover it. 00:35:56.194 [2024-10-11 22:58:59.173292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.194 [2024-10-11 22:58:59.173345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.194 qpair failed and we were unable to recover it. 
00:35:56.194 [2024-10-11 22:58:59.173520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.194 [2024-10-11 22:58:59.173598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.194 qpair failed and we were unable to recover it. 00:35:56.194 [2024-10-11 22:58:59.173881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.194 [2024-10-11 22:58:59.173952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.194 qpair failed and we were unable to recover it. 00:35:56.194 [2024-10-11 22:58:59.174179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.194 [2024-10-11 22:58:59.174252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.194 qpair failed and we were unable to recover it. 00:35:56.194 [2024-10-11 22:58:59.174472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.194 [2024-10-11 22:58:59.174528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.194 qpair failed and we were unable to recover it. 00:35:56.194 [2024-10-11 22:58:59.174830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.194 [2024-10-11 22:58:59.174900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.194 qpair failed and we were unable to recover it. 
00:35:56.195 [2024-10-11 22:58:59.175121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.175192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.175437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.175490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.175730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.175803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.176075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.176146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.176366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.176418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 
00:35:56.195 [2024-10-11 22:58:59.176691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.176763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.177048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.177119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.177331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.177384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.177654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.177726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.178019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.178089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 
00:35:56.195 [2024-10-11 22:58:59.178296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.178348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.178635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.178708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.178958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.179010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.179242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.179294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.179546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.179613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 
00:35:56.195 [2024-10-11 22:58:59.179858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.179932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.180215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.180285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.180527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.180595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.180860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.180913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.181176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.181246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 
00:35:56.195 [2024-10-11 22:58:59.181484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.181536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.181851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.181920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.182208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.182278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.182522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.182590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.182885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.182965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 
00:35:56.195 [2024-10-11 22:58:59.183249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.183321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.183518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.183587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.183778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.183830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.184064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.184136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.184287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.184340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 
00:35:56.195 [2024-10-11 22:58:59.184522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.184588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.184827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.184900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.185157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.185229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.185401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.185455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.185733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.185805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 
00:35:56.195 [2024-10-11 22:58:59.186096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.186167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.186403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.186456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.186682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.186754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.187048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.195 [2024-10-11 22:58:59.187118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.195 qpair failed and we were unable to recover it. 00:35:56.195 [2024-10-11 22:58:59.187365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.196 [2024-10-11 22:58:59.187418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.196 qpair failed and we were unable to recover it. 
00:35:56.196 [2024-10-11 22:58:59.187586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.196 [2024-10-11 22:58:59.187641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.196 qpair failed and we were unable to recover it. 00:35:56.196 [2024-10-11 22:58:59.187927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.196 [2024-10-11 22:58:59.187998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.196 qpair failed and we were unable to recover it. 00:35:56.196 [2024-10-11 22:58:59.188259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.196 [2024-10-11 22:58:59.188313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.196 qpair failed and we were unable to recover it. 00:35:56.196 [2024-10-11 22:58:59.188538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.196 [2024-10-11 22:58:59.188600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.196 qpair failed and we were unable to recover it. 00:35:56.196 [2024-10-11 22:58:59.188874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.196 [2024-10-11 22:58:59.188927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.196 qpair failed and we were unable to recover it. 
00:35:56.196 [2024-10-11 22:58:59.189212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.196 [2024-10-11 22:58:59.189283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.196 qpair failed and we were unable to recover it. 00:35:56.196 [2024-10-11 22:58:59.189482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.196 [2024-10-11 22:58:59.189534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.196 qpair failed and we were unable to recover it. 00:35:56.196 [2024-10-11 22:58:59.189731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.196 [2024-10-11 22:58:59.189804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.196 qpair failed and we were unable to recover it. 00:35:56.196 [2024-10-11 22:58:59.190060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.196 [2024-10-11 22:58:59.190130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.196 qpair failed and we were unable to recover it. 00:35:56.196 [2024-10-11 22:58:59.190357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.196 [2024-10-11 22:58:59.190427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.196 qpair failed and we were unable to recover it. 
00:35:56.196 [2024-10-11 22:58:59.190670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.196 [2024-10-11 22:58:59.190707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.196 qpair failed and we were unable to recover it. 00:35:56.196 [2024-10-11 22:58:59.190833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.196 [2024-10-11 22:58:59.190873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.196 qpair failed and we were unable to recover it. 00:35:56.196 [2024-10-11 22:58:59.190995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.196 [2024-10-11 22:58:59.191030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.196 qpair failed and we were unable to recover it. 00:35:56.196 [2024-10-11 22:58:59.191170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.196 [2024-10-11 22:58:59.191207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.196 qpair failed and we were unable to recover it. 00:35:56.196 [2024-10-11 22:58:59.191368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.196 [2024-10-11 22:58:59.191420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.196 qpair failed and we were unable to recover it. 
00:35:56.196 [2024-10-11 22:58:59.191617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.196 [2024-10-11 22:58:59.191657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.196 qpair failed and we were unable to recover it. 00:35:56.196 [2024-10-11 22:58:59.191836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.196 [2024-10-11 22:58:59.191871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.196 qpair failed and we were unable to recover it. 00:35:56.196 [2024-10-11 22:58:59.192049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.196 [2024-10-11 22:58:59.192083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.196 qpair failed and we were unable to recover it. 00:35:56.196 [2024-10-11 22:58:59.192204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.196 [2024-10-11 22:58:59.192238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.196 qpair failed and we were unable to recover it. 00:35:56.196 [2024-10-11 22:58:59.192382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.196 [2024-10-11 22:58:59.192417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.196 qpair failed and we were unable to recover it. 
00:35:56.196 [2024-10-11 22:58:59.192568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.196 [2024-10-11 22:58:59.192603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.196 qpair failed and we were unable to recover it.
00:35:56.196 [2024-10-11 22:58:59.192720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.196 [2024-10-11 22:58:59.192756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.196 qpair failed and we were unable to recover it.
00:35:56.196 [2024-10-11 22:58:59.192870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.196 [2024-10-11 22:58:59.192905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.196 qpair failed and we were unable to recover it.
00:35:56.196 [2024-10-11 22:58:59.193076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.196 [2024-10-11 22:58:59.193110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.196 qpair failed and we were unable to recover it.
00:35:56.196 [2024-10-11 22:58:59.193227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.196 [2024-10-11 22:58:59.193269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.196 qpair failed and we were unable to recover it.
00:35:56.196 [2024-10-11 22:58:59.193422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.196 [2024-10-11 22:58:59.193458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.196 qpair failed and we were unable to recover it.
00:35:56.196 [2024-10-11 22:58:59.193632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.196 [2024-10-11 22:58:59.193667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.196 qpair failed and we were unable to recover it.
00:35:56.196 [2024-10-11 22:58:59.193787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.196 [2024-10-11 22:58:59.193822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.196 qpair failed and we were unable to recover it.
00:35:56.196 [2024-10-11 22:58:59.194038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.196 [2024-10-11 22:58:59.194089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.196 qpair failed and we were unable to recover it.
00:35:56.196 [2024-10-11 22:58:59.194320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.196 [2024-10-11 22:58:59.194356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.196 qpair failed and we were unable to recover it.
00:35:56.196 [2024-10-11 22:58:59.194508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.196 [2024-10-11 22:58:59.194544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.196 qpair failed and we were unable to recover it.
00:35:56.196 [2024-10-11 22:58:59.194720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.194754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.194944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.194978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.195109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.195143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.195279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.195313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.195453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.195487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.195629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.195664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.195808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.195868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.196068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.196134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.196342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.196395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.196538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.196613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.196749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.196783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.196929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.196964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.197108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.197142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.197314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.197348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.197446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.197481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.197593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.197628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.197774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.197809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.197913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.197948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.198078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.198112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.198334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.198385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.198671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.198724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.198904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.198972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.199228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.199293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.199602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.199637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.199782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.199815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.199923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.199956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.200128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.200162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.200307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.200341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.200577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.200631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.200750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.200786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.200932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.200966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.201175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.201247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.201489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.201541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.201728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.201773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.201917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.201952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.202151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.202225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.202436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.202471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.202590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.202626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.202771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.202806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.202921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.202954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.203122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.197 [2024-10-11 22:58:59.203157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.197 qpair failed and we were unable to recover it.
00:35:56.197 [2024-10-11 22:58:59.203340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.203393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.203538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.203580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.203702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.203736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.203897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.203948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.204079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.204113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.204262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.204321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.204479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.204530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.204687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.204725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.204855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.204891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.205046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.205098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.205333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.205385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.205617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.205652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.205799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.205833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.206035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.206098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.206358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.206421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.206677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.206711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.206859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.206893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.207071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.207128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.207292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.207326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.207619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.207653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.207767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.207800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.207965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.208029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.208331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.208404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.208681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.208715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.208914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.208979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.209218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.209282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.209532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.209616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.209719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.209752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.209889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.209925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.210144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.210207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.210489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.210606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.210729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.210763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.210887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.210926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.211114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.211186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.211436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.211499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.211688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.211721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.211877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.211910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.212078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.212144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.212402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.212465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.212693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.212728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.198 [2024-10-11 22:58:59.212895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.198 [2024-10-11 22:58:59.212930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.198 qpair failed and we were unable to recover it.
00:35:56.199 [2024-10-11 22:58:59.213186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.199 [2024-10-11 22:58:59.213250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.199 qpair failed and we were unable to recover it.
00:35:56.199 [2024-10-11 22:58:59.213460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.199 [2024-10-11 22:58:59.213524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.199 qpair failed and we were unable to recover it.
00:35:56.199 [2024-10-11 22:58:59.213721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.213754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.213865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.213898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.214065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.214128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.214389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.214449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.214737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.214772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 
00:35:56.199 [2024-10-11 22:58:59.214947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.215013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.215327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.215399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.215671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.215706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.215837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.215892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.216153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.216217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 
00:35:56.199 [2024-10-11 22:58:59.216489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.216533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.216717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.216752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.216945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.216998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.217247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.217323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.217647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.217693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 
00:35:56.199 [2024-10-11 22:58:59.217802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.217870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.218138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.218202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.218423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.218474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.218720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.218754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.218892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.218947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 
00:35:56.199 [2024-10-11 22:58:59.219254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.219319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.219583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.219641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.219799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.219835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.220085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.220149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.220361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.220426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 
00:35:56.199 [2024-10-11 22:58:59.220623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.220659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.220812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.220881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.221068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.221143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.221474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.221523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.221687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.221729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 
00:35:56.199 [2024-10-11 22:58:59.221856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.221918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.222186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.222250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.222541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.222589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.222754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.222788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.222968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.223021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 
00:35:56.199 [2024-10-11 22:58:59.223245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.223310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.223625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.223662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.223852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.223933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.199 [2024-10-11 22:58:59.224171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.199 [2024-10-11 22:58:59.224225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.199 qpair failed and we were unable to recover it. 00:35:56.200 [2024-10-11 22:58:59.224495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.224601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 
00:35:56.200 [2024-10-11 22:58:59.224762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.224796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 00:35:56.200 [2024-10-11 22:58:59.225021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.225085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 00:35:56.200 [2024-10-11 22:58:59.225389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.225454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 00:35:56.200 [2024-10-11 22:58:59.225712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.225748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 00:35:56.200 [2024-10-11 22:58:59.226003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.226068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 
00:35:56.200 [2024-10-11 22:58:59.226377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.226447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 00:35:56.200 [2024-10-11 22:58:59.226683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.226718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 00:35:56.200 [2024-10-11 22:58:59.226843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.226879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 00:35:56.200 [2024-10-11 22:58:59.227028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.227063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 00:35:56.200 [2024-10-11 22:58:59.227361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.227423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 
00:35:56.200 [2024-10-11 22:58:59.227712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.227748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 00:35:56.200 [2024-10-11 22:58:59.227934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.227998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 00:35:56.200 [2024-10-11 22:58:59.228244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.228317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 00:35:56.200 [2024-10-11 22:58:59.228624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.228661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 00:35:56.200 [2024-10-11 22:58:59.228856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.228917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 
00:35:56.200 [2024-10-11 22:58:59.229177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.229241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 00:35:56.200 [2024-10-11 22:58:59.229577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.229649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 00:35:56.200 [2024-10-11 22:58:59.229799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.229851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 00:35:56.200 [2024-10-11 22:58:59.230143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.230213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 00:35:56.200 [2024-10-11 22:58:59.230480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.230546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 
00:35:56.200 [2024-10-11 22:58:59.230725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.230761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 00:35:56.200 [2024-10-11 22:58:59.230892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.230927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 00:35:56.200 [2024-10-11 22:58:59.231027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.231062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 00:35:56.200 [2024-10-11 22:58:59.231208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.231243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 00:35:56.200 [2024-10-11 22:58:59.231476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.231512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 
00:35:56.200 [2024-10-11 22:58:59.231655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.231690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 00:35:56.200 [2024-10-11 22:58:59.231826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.231888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 00:35:56.200 [2024-10-11 22:58:59.232093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.232152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 00:35:56.200 [2024-10-11 22:58:59.232388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.232445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 00:35:56.200 [2024-10-11 22:58:59.232727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.232792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 
00:35:56.200 [2024-10-11 22:58:59.233110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.233185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 00:35:56.200 [2024-10-11 22:58:59.233443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.233507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 00:35:56.200 [2024-10-11 22:58:59.233779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.233843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 00:35:56.200 [2024-10-11 22:58:59.234063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.234127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 00:35:56.200 [2024-10-11 22:58:59.234419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.200 [2024-10-11 22:58:59.234483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.200 qpair failed and we were unable to recover it. 
00:35:56.201 [2024-10-11 22:58:59.234778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.234843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.235142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.235216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.235473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.235537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.235778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.235844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.236098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.236161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 
00:35:56.201 [2024-10-11 22:58:59.236439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.236475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.236632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.236668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.236832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.236895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.237199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.237264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.237570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.237635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 
00:35:56.201 [2024-10-11 22:58:59.237933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.237996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.238238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.238289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.238464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.238499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.238690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.238759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.239012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.239087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 
00:35:56.201 [2024-10-11 22:58:59.239330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.239394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.239644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.239711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.239974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.240039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.240299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.240363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.240581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.240646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 
00:35:56.201 [2024-10-11 22:58:59.240940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.241016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.241318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.241392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.241660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.241726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.242016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.242080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.242368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.242432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 
00:35:56.201 [2024-10-11 22:58:59.242740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.242776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.242924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.242957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.243220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.243285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.243587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.243662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.243925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.243990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 
00:35:56.201 [2024-10-11 22:58:59.244261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.244326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.244516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.244594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.244890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.244966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.245228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.245263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.245401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.245437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 
00:35:56.201 [2024-10-11 22:58:59.245735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.245801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.246014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.246077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.246284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.246350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.246618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.246684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.201 [2024-10-11 22:58:59.246937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.247001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 
00:35:56.201 [2024-10-11 22:58:59.247253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.201 [2024-10-11 22:58:59.247316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.201 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.247602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.247667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.247956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.248019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.248248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.248312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.248568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.248633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 
00:35:56.202 [2024-10-11 22:58:59.248926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.249001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.249244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.249279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.249452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.249487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.249757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.249822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.250039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.250104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 
00:35:56.202 [2024-10-11 22:58:59.250339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.250375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.250520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.250564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.250747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.250803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.250950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.250987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.251250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.251314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 
00:35:56.202 [2024-10-11 22:58:59.251600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.251667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.251924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.251989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.252249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.252313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.252606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.252682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.252897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.252961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 
00:35:56.202 [2024-10-11 22:58:59.253152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.253216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.253429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.253503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.253783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.253846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.254139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.254203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.254500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.254592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 
00:35:56.202 [2024-10-11 22:58:59.254842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.254905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.255192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.255256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.255486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.255521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.255715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.255786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.256073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.256137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 
00:35:56.202 [2024-10-11 22:58:59.256336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.256402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.256700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.256769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.256986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.257052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.257340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.257405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.257694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.257759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 
00:35:56.202 [2024-10-11 22:58:59.258027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.258092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.258374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.258437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.258684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.258750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.259044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.259107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.259322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.259388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 
00:35:56.202 [2024-10-11 22:58:59.259680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.259755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.202 qpair failed and we were unable to recover it. 00:35:56.202 [2024-10-11 22:58:59.260064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.202 [2024-10-11 22:58:59.260129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 00:35:56.203 [2024-10-11 22:58:59.260431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.260503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 00:35:56.203 [2024-10-11 22:58:59.260778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.260842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 00:35:56.203 [2024-10-11 22:58:59.261143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.261218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 
00:35:56.203 [2024-10-11 22:58:59.261510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.261590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 00:35:56.203 [2024-10-11 22:58:59.261905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.261977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 00:35:56.203 [2024-10-11 22:58:59.262229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.262293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 00:35:56.203 [2024-10-11 22:58:59.262609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.262685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 00:35:56.203 [2024-10-11 22:58:59.262982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.263045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 
00:35:56.203 [2024-10-11 22:58:59.263309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.263374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 00:35:56.203 [2024-10-11 22:58:59.263608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.263673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 00:35:56.203 [2024-10-11 22:58:59.263887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.263951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 00:35:56.203 [2024-10-11 22:58:59.264242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.264306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 00:35:56.203 [2024-10-11 22:58:59.264505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.264581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 
00:35:56.203 [2024-10-11 22:58:59.264797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.264861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 00:35:56.203 [2024-10-11 22:58:59.265143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.265207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 00:35:56.203 [2024-10-11 22:58:59.265461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.265524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 00:35:56.203 [2024-10-11 22:58:59.265808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.265872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 00:35:56.203 [2024-10-11 22:58:59.266072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.266138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 
00:35:56.203 [2024-10-11 22:58:59.266443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.266516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 00:35:56.203 [2024-10-11 22:58:59.266781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.266857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 00:35:56.203 [2024-10-11 22:58:59.267152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.267216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 00:35:56.203 [2024-10-11 22:58:59.267470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.267533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 00:35:56.203 [2024-10-11 22:58:59.267869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.267932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 
00:35:56.203 [2024-10-11 22:58:59.268179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.268243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 00:35:56.203 [2024-10-11 22:58:59.268499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.268582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 00:35:56.203 [2024-10-11 22:58:59.268801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.268864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 00:35:56.203 [2024-10-11 22:58:59.269097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.269161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 00:35:56.203 [2024-10-11 22:58:59.269429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.269493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 
00:35:56.203 [2024-10-11 22:58:59.269740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.269804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 00:35:56.203 [2024-10-11 22:58:59.270083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.270118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 00:35:56.203 [2024-10-11 22:58:59.270293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.270369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 00:35:56.203 [2024-10-11 22:58:59.270590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.270655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 00:35:56.203 [2024-10-11 22:58:59.270897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.203 [2024-10-11 22:58:59.270962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.203 qpair failed and we were unable to recover it. 
00:35:56.203 [2024-10-11 22:58:59.271260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.203 [2024-10-11 22:58:59.271325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.203 qpair failed and we were unable to recover it.
00:35:56.203 [2024-10-11 22:58:59.271629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.203 [2024-10-11 22:58:59.271694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.203 qpair failed and we were unable to recover it.
00:35:56.203 [2024-10-11 22:58:59.271945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.203 [2024-10-11 22:58:59.272011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.203 qpair failed and we were unable to recover it.
00:35:56.203 [2024-10-11 22:58:59.272301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.203 [2024-10-11 22:58:59.272365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.203 qpair failed and we were unable to recover it.
00:35:56.203 [2024-10-11 22:58:59.272649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.203 [2024-10-11 22:58:59.272714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.203 qpair failed and we were unable to recover it.
00:35:56.203 [2024-10-11 22:58:59.272966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.203 [2024-10-11 22:58:59.273030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.203 qpair failed and we were unable to recover it.
00:35:56.203 [2024-10-11 22:58:59.273323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.203 [2024-10-11 22:58:59.273387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.203 qpair failed and we were unable to recover it.
00:35:56.203 [2024-10-11 22:58:59.273628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.273693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.273992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.274027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.274133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.274168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.274359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.274424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.274720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.274786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.275093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.275161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.275448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.275512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.275794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.275859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.276152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.276215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.276515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.276595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.276884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.276948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.277189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.277256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.277577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.277642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.277899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.277966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.278229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.278264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.278399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.278434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.278678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.278743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.279038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.279102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.279430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.279492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.279725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.279801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.280098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.280171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.280369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.280433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.280692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.280728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.280851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.280887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.281101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.281165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.281458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.281521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.281797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.281864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.282156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.282219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.282455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.282520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.282798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.282834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.282987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.283022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.283205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.283273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.283522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.283618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.283892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.283956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.284253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.284327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.284585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.204 [2024-10-11 22:58:59.284651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.204 qpair failed and we were unable to recover it.
00:35:56.204 [2024-10-11 22:58:59.284955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.285028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.285325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.285389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.285692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.285767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.285974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.286039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.286306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.286369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.286667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.286732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.287027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.287090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.287331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.287395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.287695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.287771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.288044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.288107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.288421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.288495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.288802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.288866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.289191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.289255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.289569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.289634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.289928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.289991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.290234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.290300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.290616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.290682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.290973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.291036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.291322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.291357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.291538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.291605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.291868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.291932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.292180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.292243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.292536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.292619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.292924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.292999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.293289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.293352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.293619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.293684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.293970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.294034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.294332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.294404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.294695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.294761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.295055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.295130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.295388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.295452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.295787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.295865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.296153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.296218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.296507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.296596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.296907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.296971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.297277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.297350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.297604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.297668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.297989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 400564 Killed "${NVMF_APP[@]}" "$@"
00:35:56.205 [2024-10-11 22:58:59.298064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.298357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 [2024-10-11 22:58:59.298420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.205 [2024-10-11 22:58:59.298637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.205 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:35:56.205 [2024-10-11 22:58:59.298714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.205 qpair failed and we were unable to recover it.
00:35:56.206 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:35:56.206 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:35:56.206 [2024-10-11 22:58:59.298965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.299039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.299332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.299397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.299645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.299713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.299989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.300024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.300128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.300162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.300316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.300356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.300600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.300654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.300864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.300929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.301136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.301191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.301418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.301472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.301681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.301714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.301860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.301893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.302035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.302067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.302264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.302317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.302536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.302604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.302794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.302847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.303061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.303118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=401116
00:35:56.206 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:35:56.206 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 401116
00:35:56.206 [2024-10-11 22:58:59.303366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.303423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 401116 ']'
00:35:56.206 [2024-10-11 22:58:59.303633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:56.206 [2024-10-11 22:58:59.303688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:35:56.206 [2024-10-11 22:58:59.303906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:56.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:56.206 [2024-10-11 22:58:59.303960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:35:56.206 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:56.206 [2024-10-11 22:58:59.304223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.304279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.304496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.304573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.304749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.304781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.304938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.304972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.305092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.305125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.305311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.305344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.305622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.305656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.305809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.305865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.306059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.306115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.306285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.306351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.306595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.306650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.306833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.306887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.307077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.307130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.307357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.307409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.206 [2024-10-11 22:58:59.307655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.206 [2024-10-11 22:58:59.307707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.206 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.307888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.307939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.308150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.308213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.308436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.308488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.308662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.308713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.308887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.308967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.309274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.309337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.309547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.309611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.309790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.309877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.310108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.310172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.310404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.310454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.310731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.310796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.311050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.311114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.311405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.311467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.311672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.311723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.311900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.311980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.312184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.312247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.312444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.312495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.312706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.312770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.313058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.313121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.313352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.313404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.313614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.313681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.313956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.314020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.314208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.314287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.314467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.314518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.314771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.314835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.315124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.315187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.315420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.315470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.315671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.315735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.315934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.315997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.316244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.316307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.316607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.316673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.316934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.316998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.317257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.317319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.317509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.317591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.317800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.317879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.318074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.318157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.318404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.318454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.318687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.318752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.319041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.319107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.319298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.319351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.207 qpair failed and we were unable to recover it.
00:35:56.207 [2024-10-11 22:58:59.319573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.207 [2024-10-11 22:58:59.319653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.208 qpair failed and we were unable to recover it.
00:35:56.208 [2024-10-11 22:58:59.319875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.208 [2024-10-11 22:58:59.319939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.208 qpair failed and we were unable to recover it.
00:35:56.208 [2024-10-11 22:58:59.320132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.208 [2024-10-11 22:58:59.320211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.208 qpair failed and we were unable to recover it.
00:35:56.208 [2024-10-11 22:58:59.320459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.208 [2024-10-11 22:58:59.320509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.208 qpair failed and we were unable to recover it.
00:35:56.208 [2024-10-11 22:58:59.320833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.208 [2024-10-11 22:58:59.320934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.208 qpair failed and we were unable to recover it.
00:35:56.208 [2024-10-11 22:58:59.321251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.208 [2024-10-11 22:58:59.321326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.208 qpair failed and we were unable to recover it.
00:35:56.208 [2024-10-11 22:58:59.321544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.208 [2024-10-11 22:58:59.321638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.208 qpair failed and we were unable to recover it.
00:35:56.208 [2024-10-11 22:58:59.321813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.208 [2024-10-11 22:58:59.321894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.208 qpair failed and we were unable to recover it.
00:35:56.208 [2024-10-11 22:58:59.322203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.208 [2024-10-11 22:58:59.322269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.208 qpair failed and we were unable to recover it.
00:35:56.208 [2024-10-11 22:58:59.322567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.208 [2024-10-11 22:58:59.322620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.208 qpair failed and we were unable to recover it.
00:35:56.208 [2024-10-11 22:58:59.322861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.208 [2024-10-11 22:58:59.322926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.208 qpair failed and we were unable to recover it.
00:35:56.208 [2024-10-11 22:58:59.323147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.208 [2024-10-11 22:58:59.323212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.208 qpair failed and we were unable to recover it.
00:35:56.208 [2024-10-11 22:58:59.323436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.208 [2024-10-11 22:58:59.323487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.208 qpair failed and we were unable to recover it.
00:35:56.208 [2024-10-11 22:58:59.323743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.208 [2024-10-11 22:58:59.323809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.208 qpair failed and we were unable to recover it.
00:35:56.208 [2024-10-11 22:58:59.324101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.208 [2024-10-11 22:58:59.324166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.208 qpair failed and we were unable to recover it.
00:35:56.208 [2024-10-11 22:58:59.324342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.208 [2024-10-11 22:58:59.324393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.208 qpair failed and we were unable to recover it.
00:35:56.208 [2024-10-11 22:58:59.324583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.208 [2024-10-11 22:58:59.324638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.208 qpair failed and we were unable to recover it.
00:35:56.208 [2024-10-11 22:58:59.324874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.208 [2024-10-11 22:58:59.324926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.208 qpair failed and we were unable to recover it.
00:35:56.208 [2024-10-11 22:58:59.325178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.208 [2024-10-11 22:58:59.325231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.208 qpair failed and we were unable to recover it.
00:35:56.208 [2024-10-11 22:58:59.325433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.208 [2024-10-11 22:58:59.325485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.208 qpair failed and we were unable to recover it.
00:35:56.208 [2024-10-11 22:58:59.325739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.208 [2024-10-11 22:58:59.325799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.208 qpair failed and we were unable to recover it.
00:35:56.208 [2024-10-11 22:58:59.326064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.208 [2024-10-11 22:58:59.326120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.208 qpair failed and we were unable to recover it.
00:35:56.208 [2024-10-11 22:58:59.326325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.208 [2024-10-11 22:58:59.326383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.208 qpair failed and we were unable to recover it.
00:35:56.208 [2024-10-11 22:58:59.326602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.208 [2024-10-11 22:58:59.326659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.208 qpair failed and we were unable to recover it.
00:35:56.208 [2024-10-11 22:58:59.326854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.208 [2024-10-11 22:58:59.326909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.208 qpair failed and we were unable to recover it. 00:35:56.208 [2024-10-11 22:58:59.327119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.208 [2024-10-11 22:58:59.327174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.208 qpair failed and we were unable to recover it. 00:35:56.208 [2024-10-11 22:58:59.327414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.208 [2024-10-11 22:58:59.327472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.208 qpair failed and we were unable to recover it. 00:35:56.208 [2024-10-11 22:58:59.327669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.208 [2024-10-11 22:58:59.327725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.208 qpair failed and we were unable to recover it. 00:35:56.208 [2024-10-11 22:58:59.327888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.208 [2024-10-11 22:58:59.327942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.208 qpair failed and we were unable to recover it. 
00:35:56.208 [2024-10-11 22:58:59.328203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.208 [2024-10-11 22:58:59.328257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.208 qpair failed and we were unable to recover it. 00:35:56.208 [2024-10-11 22:58:59.328484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.208 [2024-10-11 22:58:59.328539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.208 qpair failed and we were unable to recover it. 00:35:56.208 [2024-10-11 22:58:59.328786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.208 [2024-10-11 22:58:59.328841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.208 qpair failed and we were unable to recover it. 00:35:56.208 [2024-10-11 22:58:59.329117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.208 [2024-10-11 22:58:59.329171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.208 qpair failed and we were unable to recover it. 00:35:56.208 [2024-10-11 22:58:59.329352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.208 [2024-10-11 22:58:59.329406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.208 qpair failed and we were unable to recover it. 
00:35:56.208 [2024-10-11 22:58:59.329633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.208 [2024-10-11 22:58:59.329708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.208 qpair failed and we were unable to recover it. 00:35:56.208 [2024-10-11 22:58:59.329933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.208 [2024-10-11 22:58:59.329988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.208 qpair failed and we were unable to recover it. 00:35:56.208 [2024-10-11 22:58:59.330208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.208 [2024-10-11 22:58:59.330264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.208 qpair failed and we were unable to recover it. 00:35:56.208 [2024-10-11 22:58:59.330448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.208 [2024-10-11 22:58:59.330504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.208 qpair failed and we were unable to recover it. 00:35:56.208 [2024-10-11 22:58:59.330713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.208 [2024-10-11 22:58:59.330768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.208 qpair failed and we were unable to recover it. 
00:35:56.208 [2024-10-11 22:58:59.330980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.208 [2024-10-11 22:58:59.331034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.208 qpair failed and we were unable to recover it. 00:35:56.208 [2024-10-11 22:58:59.331213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.208 [2024-10-11 22:58:59.331274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.208 qpair failed and we were unable to recover it. 00:35:56.208 [2024-10-11 22:58:59.331529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.331628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 00:35:56.209 [2024-10-11 22:58:59.331827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.331883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 00:35:56.209 [2024-10-11 22:58:59.332056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.332111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 
00:35:56.209 [2024-10-11 22:58:59.332302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.332357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 00:35:56.209 [2024-10-11 22:58:59.332588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.332645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 00:35:56.209 [2024-10-11 22:58:59.332835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.332890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 00:35:56.209 [2024-10-11 22:58:59.333087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.333142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 00:35:56.209 [2024-10-11 22:58:59.333371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.333429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 
00:35:56.209 [2024-10-11 22:58:59.333650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.333707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 00:35:56.209 [2024-10-11 22:58:59.333880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.333938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 00:35:56.209 [2024-10-11 22:58:59.334124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.334180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 00:35:56.209 [2024-10-11 22:58:59.334354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.334409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 00:35:56.209 [2024-10-11 22:58:59.334621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.334678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 
00:35:56.209 [2024-10-11 22:58:59.334911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.334968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 00:35:56.209 [2024-10-11 22:58:59.335184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.335241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 00:35:56.209 [2024-10-11 22:58:59.335432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.335488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 00:35:56.209 [2024-10-11 22:58:59.335685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.335742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 00:35:56.209 [2024-10-11 22:58:59.335923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.335978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 
00:35:56.209 [2024-10-11 22:58:59.336194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.336249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 00:35:56.209 [2024-10-11 22:58:59.336412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.336466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 00:35:56.209 [2024-10-11 22:58:59.336673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.336731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 00:35:56.209 [2024-10-11 22:58:59.336931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.336988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 00:35:56.209 [2024-10-11 22:58:59.337193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.337248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 
00:35:56.209 [2024-10-11 22:58:59.337431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.337486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 00:35:56.209 [2024-10-11 22:58:59.337725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.337782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 00:35:56.209 [2024-10-11 22:58:59.337972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.338027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 00:35:56.209 [2024-10-11 22:58:59.338200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.338256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 00:35:56.209 [2024-10-11 22:58:59.338436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.338491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 
00:35:56.209 [2024-10-11 22:58:59.338683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.338738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 00:35:56.209 [2024-10-11 22:58:59.338933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.338988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 00:35:56.209 [2024-10-11 22:58:59.339144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.339199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 00:35:56.209 [2024-10-11 22:58:59.339382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.339437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 00:35:56.209 [2024-10-11 22:58:59.339616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.339672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 
00:35:56.209 [2024-10-11 22:58:59.339850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.339914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 00:35:56.209 [2024-10-11 22:58:59.340084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.340140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 00:35:56.209 [2024-10-11 22:58:59.340339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.340394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.209 qpair failed and we were unable to recover it. 00:35:56.209 [2024-10-11 22:58:59.340589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.209 [2024-10-11 22:58:59.340646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.340866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.340923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 
00:35:56.210 [2024-10-11 22:58:59.341129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.341183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.341351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.341408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.341663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.341721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.341907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.341987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.342202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.342268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 
00:35:56.210 [2024-10-11 22:58:59.342459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.342548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.342866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.342930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.343174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.343252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.343480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.343545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.343881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.343948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 
00:35:56.210 [2024-10-11 22:58:59.344257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.344321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.344543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.344619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.344836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.344893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.345114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.345169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.345348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.345406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 
00:35:56.210 [2024-10-11 22:58:59.345663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.345720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.345888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.345944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.346103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.346159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.346410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.346464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.346700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.346757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 
00:35:56.210 [2024-10-11 22:58:59.346979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.347034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.347219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.347275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.347491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.347547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.347765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.347820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.347994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.348049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 
00:35:56.210 [2024-10-11 22:58:59.348238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.348292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.348544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.348611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.348878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.348936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.349187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.349242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.349447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.349502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 
00:35:56.210 [2024-10-11 22:58:59.349744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.349801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.349967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.350023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.350251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.350307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.350571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.350647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.350838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.350893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 
00:35:56.210 [2024-10-11 22:58:59.351166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.351231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.351467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.351532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.351855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.351911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 00:35:56.210 [2024-10-11 22:58:59.352094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.210 [2024-10-11 22:58:59.352124] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:35:56.210 [2024-10-11 22:58:59.352149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.210 qpair failed and we were unable to recover it. 
00:35:56.211 [2024-10-11 22:58:59.352218] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:56.211 [2024-10-11 22:58:59.352409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.352463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.352675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.352729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.352908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.352963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.353168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.353221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.353407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.353466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.353775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.353819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.354043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.354086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.354292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.354337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.354516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.354578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.354784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.354830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.355015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.355061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.355277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.355342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.355600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.355666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.355949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.356014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.356260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.356327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.356542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.356623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.356872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.356937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.357198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.357264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.357467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.357531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.357815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.357881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.358180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.358245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.358461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.358525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.358800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.358865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.359099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.359163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.359425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.359490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.359783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.359849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.360087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.360151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.360356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.360421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.360713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.360779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.361077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.361141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.361396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.361461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.361720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.361785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.361989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.362051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.362274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.362338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.362571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.362636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.362846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.362921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.363505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.363601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.363899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.363966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.364073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.364103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.211 [2024-10-11 22:58:59.364233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.211 [2024-10-11 22:58:59.364264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.211 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.364392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.364423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.364534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.364575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.364680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.364710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.364854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.364884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.364991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.365020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.365114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.365143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.365240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.365270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.365373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.365402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.365505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.365535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.365681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.365711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.365814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.365843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.365934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.365965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.366080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.366110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.366245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.366274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.366407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.366437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.366542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.366581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.366679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.366708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.366805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.366835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.366937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.366967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.367059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.367088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.367217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.367247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.367369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.367399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.367497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.367528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.367691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.367739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.367837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.367869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.367973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.368003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.368112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.368143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.368249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.368279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.368416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.368445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.368534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.368575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.368676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.368706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.368886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.368949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.369210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.369274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.369508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.369610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.369742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.369772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.369950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.370035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.370288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.370353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.370619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.370649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.370780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.370809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.371033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.371099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.212 [2024-10-11 22:58:59.371313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.212 [2024-10-11 22:58:59.371377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.212 qpair failed and we were unable to recover it.
00:35:56.213 [2024-10-11 22:58:59.371597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.213 [2024-10-11 22:58:59.371654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.213 qpair failed and we were unable to recover it.
00:35:56.213 [2024-10-11 22:58:59.371788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.213 [2024-10-11 22:58:59.371817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.213 qpair failed and we were unable to recover it.
00:35:56.213 [2024-10-11 22:58:59.371918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.213 [2024-10-11 22:58:59.371947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.213 qpair failed and we were unable to recover it.
00:35:56.213 [2024-10-11 22:58:59.372036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.213 [2024-10-11 22:58:59.372065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.213 qpair failed and we were unable to recover it.
00:35:56.213 [2024-10-11 22:58:59.372220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.213 [2024-10-11 22:58:59.372283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.213 qpair failed and we were unable to recover it.
00:35:56.213 [2024-10-11 22:58:59.372592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.213 [2024-10-11 22:58:59.372622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.213 qpair failed and we were unable to recover it.
00:35:56.213 [2024-10-11 22:58:59.372727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.213 [2024-10-11 22:58:59.372757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.213 qpair failed and we were unable to recover it.
00:35:56.213 [2024-10-11 22:58:59.372854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.213 [2024-10-11 22:58:59.372883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.213 qpair failed and we were unable to recover it.
00:35:56.213 [2024-10-11 22:58:59.372991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.213 [2024-10-11 22:58:59.373020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.213 qpair failed and we were unable to recover it.
00:35:56.213 [2024-10-11 22:58:59.373213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.213 [2024-10-11 22:58:59.373275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.213 qpair failed and we were unable to recover it.
00:35:56.213 [2024-10-11 22:58:59.373482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.213 [2024-10-11 22:58:59.373512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.213 qpair failed and we were unable to recover it.
00:35:56.213 [2024-10-11 22:58:59.373646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.213 [2024-10-11 22:58:59.373677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.213 qpair failed and we were unable to recover it.
00:35:56.213 [2024-10-11 22:58:59.373780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.213 [2024-10-11 22:58:59.373809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.213 qpair failed and we were unable to recover it.
00:35:56.213 [2024-10-11 22:58:59.374040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.213 [2024-10-11 22:58:59.374103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.213 qpair failed and we were unable to recover it. 00:35:56.213 [2024-10-11 22:58:59.374327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.213 [2024-10-11 22:58:59.374386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.213 qpair failed and we were unable to recover it. 00:35:56.213 [2024-10-11 22:58:59.374612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.213 [2024-10-11 22:58:59.374642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.213 qpair failed and we were unable to recover it. 00:35:56.213 [2024-10-11 22:58:59.374735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.213 [2024-10-11 22:58:59.374764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.213 qpair failed and we were unable to recover it. 00:35:56.213 [2024-10-11 22:58:59.374935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.213 [2024-10-11 22:58:59.374994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.213 qpair failed and we were unable to recover it. 
00:35:56.213 [2024-10-11 22:58:59.375196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.213 [2024-10-11 22:58:59.375253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.213 qpair failed and we were unable to recover it. 00:35:56.213 [2024-10-11 22:58:59.375497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.213 [2024-10-11 22:58:59.375586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.213 qpair failed and we were unable to recover it. 00:35:56.213 [2024-10-11 22:58:59.375680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.213 [2024-10-11 22:58:59.375709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.213 qpair failed and we were unable to recover it. 00:35:56.213 [2024-10-11 22:58:59.375804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.213 [2024-10-11 22:58:59.375833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.213 qpair failed and we were unable to recover it. 00:35:56.213 [2024-10-11 22:58:59.375975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.213 [2024-10-11 22:58:59.376005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.213 qpair failed and we were unable to recover it. 
00:35:56.213 [2024-10-11 22:58:59.376133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.213 [2024-10-11 22:58:59.376185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.213 qpair failed and we were unable to recover it. 00:35:56.213 [2024-10-11 22:58:59.376387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.213 [2024-10-11 22:58:59.376450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.213 qpair failed and we were unable to recover it. 00:35:56.213 [2024-10-11 22:58:59.376688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.213 [2024-10-11 22:58:59.376718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.213 qpair failed and we were unable to recover it. 00:35:56.213 [2024-10-11 22:58:59.376826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.213 [2024-10-11 22:58:59.376856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.213 qpair failed and we were unable to recover it. 00:35:56.213 [2024-10-11 22:58:59.376981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.213 [2024-10-11 22:58:59.377010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.213 qpair failed and we were unable to recover it. 
00:35:56.213 [2024-10-11 22:58:59.377225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.213 [2024-10-11 22:58:59.377283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.213 qpair failed and we were unable to recover it. 00:35:56.213 [2024-10-11 22:58:59.377492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.213 [2024-10-11 22:58:59.377595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.213 qpair failed and we were unable to recover it. 00:35:56.213 [2024-10-11 22:58:59.377729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.213 [2024-10-11 22:58:59.377759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.213 qpair failed and we were unable to recover it. 00:35:56.213 [2024-10-11 22:58:59.377868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.213 [2024-10-11 22:58:59.377897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.213 qpair failed and we were unable to recover it. 00:35:56.213 [2024-10-11 22:58:59.378172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.213 [2024-10-11 22:58:59.378231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.213 qpair failed and we were unable to recover it. 
00:35:56.213 [2024-10-11 22:58:59.378446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.213 [2024-10-11 22:58:59.378504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.213 qpair failed and we were unable to recover it. 00:35:56.213 [2024-10-11 22:58:59.378665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.213 [2024-10-11 22:58:59.378694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.213 qpair failed and we were unable to recover it. 00:35:56.213 [2024-10-11 22:58:59.378828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.213 [2024-10-11 22:58:59.378858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.379032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.379090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.379315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.379373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 
00:35:56.214 [2024-10-11 22:58:59.379548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.379587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.379693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.379722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.379850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.379885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.380083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.380142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.380302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.380369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 
00:35:56.214 [2024-10-11 22:58:59.380607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.380638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.380741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.380771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.380943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.381002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.381268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.381325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.381503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.381574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 
00:35:56.214 [2024-10-11 22:58:59.381709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.381738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.381846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.381875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.381974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.382003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.382096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.382126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.382274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.382332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 
00:35:56.214 [2024-10-11 22:58:59.382575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.382628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.382751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.382781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.382880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.382910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.383063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.383092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.383260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.383318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 
00:35:56.214 [2024-10-11 22:58:59.383487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.383516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.383633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.383663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.383768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.383797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.384027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.384086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.384339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.384407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 
00:35:56.214 [2024-10-11 22:58:59.384632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.384662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.384770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.384801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.384956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.385016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.385282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.385341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.385608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.385639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 
00:35:56.214 [2024-10-11 22:58:59.385737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.385767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.385855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.385921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.386104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.386133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.386260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.386289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.386526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.386610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 
00:35:56.214 [2024-10-11 22:58:59.386712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.386741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.386835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.386865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.386973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.387002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.214 qpair failed and we were unable to recover it. 00:35:56.214 [2024-10-11 22:58:59.387210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.214 [2024-10-11 22:58:59.387267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.215 qpair failed and we were unable to recover it. 00:35:56.215 [2024-10-11 22:58:59.387450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.215 [2024-10-11 22:58:59.387508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.215 qpair failed and we were unable to recover it. 
00:35:56.215 [2024-10-11 22:58:59.387754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.215 [2024-10-11 22:58:59.387796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.215 qpair failed and we were unable to recover it. 00:35:56.215 [2024-10-11 22:58:59.387994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.215 [2024-10-11 22:58:59.388052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.215 qpair failed and we were unable to recover it. 00:35:56.215 [2024-10-11 22:58:59.388291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.215 [2024-10-11 22:58:59.388348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.215 qpair failed and we were unable to recover it. 00:35:56.215 [2024-10-11 22:58:59.388515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.215 [2024-10-11 22:58:59.388593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.215 qpair failed and we were unable to recover it. 00:35:56.215 [2024-10-11 22:58:59.388877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.215 [2024-10-11 22:58:59.388921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.215 qpair failed and we were unable to recover it. 
00:35:56.215 [2024-10-11 22:58:59.389126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.215 [2024-10-11 22:58:59.389184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.215 qpair failed and we were unable to recover it. 00:35:56.215 [2024-10-11 22:58:59.389417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.215 [2024-10-11 22:58:59.389476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.215 qpair failed and we were unable to recover it. 00:35:56.215 [2024-10-11 22:58:59.389758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.215 [2024-10-11 22:58:59.389817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.215 qpair failed and we were unable to recover it. 00:35:56.215 [2024-10-11 22:58:59.390040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.215 [2024-10-11 22:58:59.390084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.215 qpair failed and we were unable to recover it. 00:35:56.215 [2024-10-11 22:58:59.390303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.215 [2024-10-11 22:58:59.390360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.215 qpair failed and we were unable to recover it. 
00:35:56.215 [2024-10-11 22:58:59.390568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.215 [2024-10-11 22:58:59.390627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.215 qpair failed and we were unable to recover it. 00:35:56.215 [2024-10-11 22:58:59.390822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.215 [2024-10-11 22:58:59.390903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.215 qpair failed and we were unable to recover it. 00:35:56.215 [2024-10-11 22:58:59.391126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.215 [2024-10-11 22:58:59.391170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.215 qpair failed and we were unable to recover it. 00:35:56.215 [2024-10-11 22:58:59.391313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.215 [2024-10-11 22:58:59.391357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.215 qpair failed and we were unable to recover it. 00:35:56.215 [2024-10-11 22:58:59.391573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.215 [2024-10-11 22:58:59.391637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.215 qpair failed and we were unable to recover it. 
00:35:56.215 [2024-10-11 22:58:59.391863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.215 [2024-10-11 22:58:59.391911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.215 qpair failed and we were unable to recover it. 00:35:56.215 [2024-10-11 22:58:59.392070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.215 [2024-10-11 22:58:59.392096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.215 qpair failed and we were unable to recover it. 00:35:56.215 [2024-10-11 22:58:59.392195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.215 [2024-10-11 22:58:59.392221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.215 qpair failed and we were unable to recover it. 00:35:56.215 [2024-10-11 22:58:59.392307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.215 [2024-10-11 22:58:59.392332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.215 qpair failed and we were unable to recover it. 00:35:56.215 [2024-10-11 22:58:59.392425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.215 [2024-10-11 22:58:59.392450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.215 qpair failed and we were unable to recover it. 
00:35:56.215 [2024-10-11 22:58:59.392541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.215 [2024-10-11 22:58:59.392574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.215 qpair failed and we were unable to recover it.
00:35:56.218 [last message sequence repeated 114 more times, 2024-10-11 22:58:59.392689 through 22:58:59.407160; every attempt failed with errno = 111 (ECONNREFUSED) connecting tqpair=0x222c340 to 10.0.0.2:4420]
00:35:56.218 [2024-10-11 22:58:59.407264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.218 [2024-10-11 22:58:59.407289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.218 qpair failed and we were unable to recover it. 00:35:56.218 [2024-10-11 22:58:59.407380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.218 [2024-10-11 22:58:59.407406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.218 qpair failed and we were unable to recover it. 00:35:56.218 [2024-10-11 22:58:59.407499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.218 [2024-10-11 22:58:59.407524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.218 qpair failed and we were unable to recover it. 00:35:56.218 [2024-10-11 22:58:59.407638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.218 [2024-10-11 22:58:59.407664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.218 qpair failed and we were unable to recover it. 00:35:56.218 [2024-10-11 22:58:59.407745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.218 [2024-10-11 22:58:59.407771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.218 qpair failed and we were unable to recover it. 
00:35:56.218 [2024-10-11 22:58:59.407862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.218 [2024-10-11 22:58:59.407887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.218 qpair failed and we were unable to recover it. 00:35:56.218 [2024-10-11 22:58:59.408007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.218 [2024-10-11 22:58:59.408032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.218 qpair failed and we were unable to recover it. 00:35:56.218 [2024-10-11 22:58:59.408151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.218 [2024-10-11 22:58:59.408177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.218 qpair failed and we were unable to recover it. 00:35:56.218 [2024-10-11 22:58:59.408291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.218 [2024-10-11 22:58:59.408317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.218 qpair failed and we were unable to recover it. 00:35:56.218 [2024-10-11 22:58:59.408429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.218 [2024-10-11 22:58:59.408454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.218 qpair failed and we were unable to recover it. 
00:35:56.218 [2024-10-11 22:58:59.408610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.218 [2024-10-11 22:58:59.408636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.218 qpair failed and we were unable to recover it. 00:35:56.218 [2024-10-11 22:58:59.408757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.218 [2024-10-11 22:58:59.408782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.218 qpair failed and we were unable to recover it. 00:35:56.218 [2024-10-11 22:58:59.408904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.218 [2024-10-11 22:58:59.408928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.218 qpair failed and we were unable to recover it. 00:35:56.218 [2024-10-11 22:58:59.409017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.218 [2024-10-11 22:58:59.409042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.218 qpair failed and we were unable to recover it. 00:35:56.218 [2024-10-11 22:58:59.409124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.218 [2024-10-11 22:58:59.409149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.218 qpair failed and we were unable to recover it. 
00:35:56.218 [2024-10-11 22:58:59.409262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.218 [2024-10-11 22:58:59.409287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.218 qpair failed and we were unable to recover it. 00:35:56.218 [2024-10-11 22:58:59.409373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.218 [2024-10-11 22:58:59.409398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.218 qpair failed and we were unable to recover it. 00:35:56.218 [2024-10-11 22:58:59.409511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.218 [2024-10-11 22:58:59.409536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.218 qpair failed and we were unable to recover it. 00:35:56.218 [2024-10-11 22:58:59.409634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.218 [2024-10-11 22:58:59.409659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.218 qpair failed and we were unable to recover it. 00:35:56.218 [2024-10-11 22:58:59.409775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.218 [2024-10-11 22:58:59.409800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.218 qpair failed and we were unable to recover it. 
00:35:56.218 [2024-10-11 22:58:59.409911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.218 [2024-10-11 22:58:59.409936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.218 qpair failed and we were unable to recover it. 00:35:56.218 [2024-10-11 22:58:59.410057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.218 [2024-10-11 22:58:59.410082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.218 qpair failed and we were unable to recover it. 00:35:56.218 [2024-10-11 22:58:59.410220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.218 [2024-10-11 22:58:59.410245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.218 qpair failed and we were unable to recover it. 00:35:56.218 [2024-10-11 22:58:59.410365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.218 [2024-10-11 22:58:59.410391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.218 qpair failed and we were unable to recover it. 00:35:56.218 [2024-10-11 22:58:59.410501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.410526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 
00:35:56.219 [2024-10-11 22:58:59.410644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.410669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.410789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.410814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.410908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.410934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.411013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.411038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.411132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.411157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 
00:35:56.219 [2024-10-11 22:58:59.411242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.411267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.411355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.411382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.411465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.411490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.411614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.411640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.411733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.411758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 
00:35:56.219 [2024-10-11 22:58:59.411839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.411864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.411944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.411970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.412087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.412113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.412204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.412229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.412347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.412373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 
00:35:56.219 [2024-10-11 22:58:59.412456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.412481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.412621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.412647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.412787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.412812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.412907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.412932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.413045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.413071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 
00:35:56.219 [2024-10-11 22:58:59.413180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.413205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.413288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.413315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.413409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.413435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.413524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.413555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.413640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.413666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 
00:35:56.219 [2024-10-11 22:58:59.413777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.413802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.413882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.413907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.414024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.414050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.414163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.414188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.414267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.414293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 
00:35:56.219 [2024-10-11 22:58:59.414437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.414463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.414578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.414603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.414690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.414715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.414807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.414833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.414921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.414946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 
00:35:56.219 [2024-10-11 22:58:59.415064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.415090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.415182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.415209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.415295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.415320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.415485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.415510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-10-11 22:58:59.415601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-10-11 22:58:59.415628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 
00:35:56.219 [2024-10-11 22:58:59.415718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.415743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.415856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.415885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.416027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.416052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.416168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.416194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.416299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.416324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 
00:35:56.220 [2024-10-11 22:58:59.416405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.416430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.416547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.416578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.416662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.416687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.416778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.416803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.416878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.416903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 
00:35:56.220 [2024-10-11 22:58:59.417021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.417047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.417152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.417177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.417262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.417287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.417392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.417417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.417534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.417566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 
00:35:56.220 [2024-10-11 22:58:59.417655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.417681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.417758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.417784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.417858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.417883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.417970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.417995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.418107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.418134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 
00:35:56.220 [2024-10-11 22:58:59.418218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.418243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.418401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.418427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.418502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.418528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.418617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.418643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.418759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.418785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 
00:35:56.220 [2024-10-11 22:58:59.418899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.418925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.419020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.419047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.419122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.419148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.419228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.419258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.419404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.419429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 
00:35:56.220 [2024-10-11 22:58:59.419508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.419534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.419653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.419680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.419766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.419792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.419902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.419927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.420050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.420076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 
00:35:56.220 [2024-10-11 22:58:59.420182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.420207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.420298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.420323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.420418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.420444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.420559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.420587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.420692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.420718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 
00:35:56.220 [2024-10-11 22:58:59.420836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.420861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-10-11 22:58:59.420968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-10-11 22:58:59.420994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.421116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.421141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.421230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.421255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.421367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.421392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 
00:35:56.221 [2024-10-11 22:58:59.421505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.421530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.421650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.421677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.421752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.421777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.421860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.421884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.422001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.422026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 
00:35:56.221 [2024-10-11 22:58:59.422113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.422140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.422226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.422253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.422336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.422361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.422449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.422474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.422570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.422596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 
00:35:56.221 [2024-10-11 22:58:59.422683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.422712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.422806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.422832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.422914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.422940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.423019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.423044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.423129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.423154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 
00:35:56.221 [2024-10-11 22:58:59.423230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.423255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.423370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.423381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:56.221 [2024-10-11 22:58:59.423395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.423528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.423559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.423651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.423676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.423792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.423817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 
00:35:56.221 [2024-10-11 22:58:59.423896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.423921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.424039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.424064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.424153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.424179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.424289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.424314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.424431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.424456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 
00:35:56.221 [2024-10-11 22:58:59.424559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.424585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.424673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.424698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.424784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.424810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.424889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.424915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.425055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.425080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 
00:35:56.221 [2024-10-11 22:58:59.425161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.425186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.425268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.425293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.425384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.425409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.425495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.425520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.221 qpair failed and we were unable to recover it. 00:35:56.221 [2024-10-11 22:58:59.425611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.221 [2024-10-11 22:58:59.425637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.222 qpair failed and we were unable to recover it. 
00:35:56.222 [2024-10-11 22:58:59.425742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.222 [2024-10-11 22:58:59.425767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.222 qpair failed and we were unable to recover it. 00:35:56.222 [2024-10-11 22:58:59.425858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.222 [2024-10-11 22:58:59.425883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.222 qpair failed and we were unable to recover it. 00:35:56.222 [2024-10-11 22:58:59.426004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.222 [2024-10-11 22:58:59.426034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.222 qpair failed and we were unable to recover it. 00:35:56.222 [2024-10-11 22:58:59.426181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.222 [2024-10-11 22:58:59.426206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.222 qpair failed and we were unable to recover it. 00:35:56.222 [2024-10-11 22:58:59.426347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.222 [2024-10-11 22:58:59.426372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.222 qpair failed and we were unable to recover it. 
00:35:56.222 [2024-10-11 22:58:59.426462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.222 [2024-10-11 22:58:59.426487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.222 qpair failed and we were unable to recover it. 00:35:56.222 [2024-10-11 22:58:59.426575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.222 [2024-10-11 22:58:59.426602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.222 qpair failed and we were unable to recover it. 00:35:56.222 [2024-10-11 22:58:59.426675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.222 [2024-10-11 22:58:59.426701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.222 qpair failed and we were unable to recover it. 00:35:56.222 [2024-10-11 22:58:59.426841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.222 [2024-10-11 22:58:59.426867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.222 qpair failed and we were unable to recover it. 00:35:56.222 [2024-10-11 22:58:59.427087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.222 [2024-10-11 22:58:59.427112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.222 qpair failed and we were unable to recover it. 
00:35:56.222 [2024-10-11 22:58:59.427199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.222 [2024-10-11 22:58:59.427224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.222 qpair failed and we were unable to recover it. 00:35:56.222 [2024-10-11 22:58:59.427305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.222 [2024-10-11 22:58:59.427331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.222 qpair failed and we were unable to recover it. 00:35:56.222 [2024-10-11 22:58:59.427413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.222 [2024-10-11 22:58:59.427438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.222 qpair failed and we were unable to recover it. 00:35:56.222 [2024-10-11 22:58:59.427559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.222 [2024-10-11 22:58:59.427585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.222 qpair failed and we were unable to recover it. 00:35:56.222 [2024-10-11 22:58:59.427693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.222 [2024-10-11 22:58:59.427719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.222 qpair failed and we were unable to recover it. 
00:35:56.222 [2024-10-11 22:58:59.427845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.222 [2024-10-11 22:58:59.427870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.222 qpair failed and we were unable to recover it. 00:35:56.222 [2024-10-11 22:58:59.427982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.222 [2024-10-11 22:58:59.428007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.222 qpair failed and we were unable to recover it. 00:35:56.222 [2024-10-11 22:58:59.428120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.222 [2024-10-11 22:58:59.428146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.222 qpair failed and we were unable to recover it. 00:35:56.222 [2024-10-11 22:58:59.428254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.222 [2024-10-11 22:58:59.428280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.222 qpair failed and we were unable to recover it. 00:35:56.222 [2024-10-11 22:58:59.428424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.222 [2024-10-11 22:58:59.428449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.222 qpair failed and we were unable to recover it. 
00:35:56.222 [2024-10-11 22:58:59.428590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.222 [2024-10-11 22:58:59.428616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.222 qpair failed and we were unable to recover it. 00:35:56.222 [2024-10-11 22:58:59.428714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.222 [2024-10-11 22:58:59.428740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.222 qpair failed and we were unable to recover it. 00:35:56.222 [2024-10-11 22:58:59.428857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.222 [2024-10-11 22:58:59.428883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.222 qpair failed and we were unable to recover it. 00:35:56.222 [2024-10-11 22:58:59.429007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.222 [2024-10-11 22:58:59.429032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.222 qpair failed and we were unable to recover it. 00:35:56.222 [2024-10-11 22:58:59.429160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.222 [2024-10-11 22:58:59.429186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.222 qpair failed and we were unable to recover it. 
00:35:56.222 [2024-10-11 22:58:59.429272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.222 [2024-10-11 22:58:59.429297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.222 qpair failed and we were unable to recover it.
00:35:56.508 [2024-10-11 22:58:59.445716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.508 [2024-10-11 22:58:59.445742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.508 qpair failed and we were unable to recover it. 00:35:56.508 [2024-10-11 22:58:59.445820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.508 [2024-10-11 22:58:59.445846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.508 qpair failed and we were unable to recover it. 00:35:56.508 [2024-10-11 22:58:59.445976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.508 [2024-10-11 22:58:59.446003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.508 qpair failed and we were unable to recover it. 00:35:56.508 [2024-10-11 22:58:59.446117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.508 [2024-10-11 22:58:59.446144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.508 qpair failed and we were unable to recover it. 00:35:56.508 [2024-10-11 22:58:59.446234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.508 [2024-10-11 22:58:59.446260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.508 qpair failed and we were unable to recover it. 
00:35:56.508 [2024-10-11 22:58:59.446342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.508 [2024-10-11 22:58:59.446367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.508 qpair failed and we were unable to recover it. 00:35:56.508 [2024-10-11 22:58:59.446467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.508 [2024-10-11 22:58:59.446493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.508 qpair failed and we were unable to recover it. 00:35:56.508 [2024-10-11 22:58:59.446644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.508 [2024-10-11 22:58:59.446670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.508 qpair failed and we were unable to recover it. 00:35:56.508 [2024-10-11 22:58:59.446788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.508 [2024-10-11 22:58:59.446814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.508 qpair failed and we were unable to recover it. 00:35:56.508 [2024-10-11 22:58:59.446938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.508 [2024-10-11 22:58:59.446964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.508 qpair failed and we were unable to recover it. 
00:35:56.508 [2024-10-11 22:58:59.447054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.508 [2024-10-11 22:58:59.447080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.508 qpair failed and we were unable to recover it. 00:35:56.508 [2024-10-11 22:58:59.447197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.508 [2024-10-11 22:58:59.447226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.508 qpair failed and we were unable to recover it. 00:35:56.508 [2024-10-11 22:58:59.447325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.508 [2024-10-11 22:58:59.447351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.508 qpair failed and we were unable to recover it. 00:35:56.508 [2024-10-11 22:58:59.447438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.508 [2024-10-11 22:58:59.447464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.508 qpair failed and we were unable to recover it. 00:35:56.508 [2024-10-11 22:58:59.447598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.508 [2024-10-11 22:58:59.447624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.508 qpair failed and we were unable to recover it. 
00:35:56.508 [2024-10-11 22:58:59.447710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.508 [2024-10-11 22:58:59.447737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.447820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.447846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.447931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.447957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.448046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.448073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.448187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.448213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 
00:35:56.509 [2024-10-11 22:58:59.448328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.448354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.448446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.448472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.448568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.448595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.448713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.448740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.448827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.448860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 
00:35:56.509 [2024-10-11 22:58:59.448966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.448992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.449076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.449102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.449221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.449246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.449329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.449354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.449438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.449465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 
00:35:56.509 [2024-10-11 22:58:59.449568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.449597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.449716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.449742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.449861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.449886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.449965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.449991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.450081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.450107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 
00:35:56.509 [2024-10-11 22:58:59.450185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.450211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.450366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.450392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.450481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.450506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.450609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.450636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.450717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.450743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 
00:35:56.509 [2024-10-11 22:58:59.450864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.450890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.451006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.451032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.451145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.451171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.451319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.451345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.451429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.451456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 
00:35:56.509 [2024-10-11 22:58:59.451600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.451628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.451720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.451746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.451867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.451893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.451983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.452009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.452096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.452121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 
00:35:56.509 [2024-10-11 22:58:59.452239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.452265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.452386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.452412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.452487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.452517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.452600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.452626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.452714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.452740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 
00:35:56.509 [2024-10-11 22:58:59.452851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.452876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.452961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.452988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.453076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.453103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.453246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.453277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.453405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.453431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 
00:35:56.509 [2024-10-11 22:58:59.453516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.453542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.453674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.453700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.453816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.453841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.453960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.453986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.454102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.454128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 
00:35:56.509 [2024-10-11 22:58:59.454220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.454245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.454342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.454371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.454484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.454510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.454612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.509 [2024-10-11 22:58:59.454638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.509 qpair failed and we were unable to recover it. 00:35:56.509 [2024-10-11 22:58:59.454728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.454753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 
00:35:56.510 [2024-10-11 22:58:59.454864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.454890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.454979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.455005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.455130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.455156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.455281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.455307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.455392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.455418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 
00:35:56.510 [2024-10-11 22:58:59.455504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.455529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.455655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.455681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.455793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.455820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.455965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.456007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.456112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.456147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 
00:35:56.510 [2024-10-11 22:58:59.456259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.456298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.456429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.456465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.456576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.456606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.456750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.456778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.456900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.456937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 
00:35:56.510 [2024-10-11 22:58:59.456991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223a260 (9): Bad file descriptor 00:35:56.510 [2024-10-11 22:58:59.457114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.457141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.457260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.457286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.457364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.457390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.457495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.457521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.457612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.457639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 
00:35:56.510 [2024-10-11 22:58:59.457727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.457753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.457870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.457895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.458019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.458049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.458172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.458198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.458325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.458351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 
00:35:56.510 [2024-10-11 22:58:59.458469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.458495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.458578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.458605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.458723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.458749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.458841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.458867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.458960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.458993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 
00:35:56.510 [2024-10-11 22:58:59.459121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.459147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.459230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.459256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.459339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.459364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.459470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.459507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.459632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.459658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 
00:35:56.510 [2024-10-11 22:58:59.459738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.459763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.459851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.459879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.459994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.460020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.460133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.460158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.460304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.460329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 
00:35:56.510 [2024-10-11 22:58:59.460413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.460439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.460525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.460557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.460645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.460671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.460760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.460785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.460906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.460932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 
00:35:56.510 [2024-10-11 22:58:59.461050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.461077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.461188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.461213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.461298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.461323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.461410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.461436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.461597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.461628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 
00:35:56.510 [2024-10-11 22:58:59.461743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.461769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.461884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.461909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.510 qpair failed and we were unable to recover it. 00:35:56.510 [2024-10-11 22:58:59.461993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.510 [2024-10-11 22:58:59.462019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.462103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.462128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.462207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.462233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 
00:35:56.511 [2024-10-11 22:58:59.462341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.462366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.462447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.462472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.462591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.462618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.462726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.462752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.462871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.462896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 
00:35:56.511 [2024-10-11 22:58:59.462989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.463014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.463129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.463154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.463232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.463257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.463339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.463364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.463477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.463504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 
00:35:56.511 [2024-10-11 22:58:59.463609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.463648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.463775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.463803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.463895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.463921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.464007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.464032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.464149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.464176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 
00:35:56.511 [2024-10-11 22:58:59.464318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.464344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.464425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.464452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.464576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.464603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.464716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.464742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.464890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.464915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 
00:35:56.511 [2024-10-11 22:58:59.464999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.465025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.465782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.465819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.465960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.465987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.466083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.466109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.466193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.466219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 
00:35:56.511 [2024-10-11 22:58:59.466333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.466358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.466450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.466476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.466595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.466623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.466704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.466730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.466875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.466902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 
00:35:56.511 [2024-10-11 22:58:59.467020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.467047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.467132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.467157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.467265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.467291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.467405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.467432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.467514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.467541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 
00:35:56.511 [2024-10-11 22:58:59.467651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.467678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.467761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.467787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.467891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.467927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.468075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.468101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.468182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.468208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 
00:35:56.511 [2024-10-11 22:58:59.468345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.468371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.468512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.468538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.468663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.468690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.468814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.468854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.511 qpair failed and we were unable to recover it. 00:35:56.511 [2024-10-11 22:58:59.468954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.511 [2024-10-11 22:58:59.468982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 
00:35:56.512 [2024-10-11 22:58:59.469067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.469093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.469175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.469200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.469314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.469340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.469460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.469486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.469615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.469642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 
00:35:56.512 [2024-10-11 22:58:59.469732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.469758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.469919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.469945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.470073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.470098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.470191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.470218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.470334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.470362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 
00:35:56.512 [2024-10-11 22:58:59.470491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.470521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.470637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.470664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.470776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.470803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.470934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.470961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.471105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.471132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 
00:35:56.512 [2024-10-11 22:58:59.471253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.471280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.471370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.471397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.471557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.471584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.471668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.471694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.471772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.471798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 
00:35:56.512 [2024-10-11 22:58:59.471898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.471925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.472009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.472034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.472163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.472192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.472315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.472342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.472465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.472491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 
00:35:56.512 [2024-10-11 22:58:59.472586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.472613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.472726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.472753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.472881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.472907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.472986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.473012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.473102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.473128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 
00:35:56.512 [2024-10-11 22:58:59.473215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.473241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.473359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.473387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.473468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.473494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.473593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.473620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.473706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.473732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 
00:35:56.512 [2024-10-11 22:58:59.473859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.473885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.473963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.473993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.474075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.474101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.474181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.474207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.474291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.474321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 
00:35:56.512 [2024-10-11 22:58:59.474403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.512 [2024-10-11 22:58:59.474429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.512 qpair failed and we were unable to recover it.
00:35:56.512 [2024-10-11 22:58:59.474517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.512 [2024-10-11 22:58:59.474543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.512 qpair failed and we were unable to recover it.
00:35:56.512 [2024-10-11 22:58:59.474637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.512 [2024-10-11 22:58:59.474661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.512 qpair failed and we were unable to recover it.
00:35:56.512 [2024-10-11 22:58:59.474601] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:56.512 [2024-10-11 22:58:59.474636] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:56.512 [2024-10-11 22:58:59.474651] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:56.512 [2024-10-11 22:58:59.474669] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:56.512 [2024-10-11 22:58:59.474681] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:56.512 [2024-10-11 22:58:59.474776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.474800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.474924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.474949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.475058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.475084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.475176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.475203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.475311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.475340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 
00:35:56.512 [2024-10-11 22:58:59.475429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.475454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.475564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.475590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.475704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.475730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.475818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.475843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 00:35:56.512 [2024-10-11 22:58:59.475929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.512 [2024-10-11 22:58:59.475955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.512 qpair failed and we were unable to recover it. 
00:35:56.512 [2024-10-11 22:58:59.476035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.512 [2024-10-11 22:58:59.476061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.512 qpair failed and we were unable to recover it.
00:35:56.512 [2024-10-11 22:58:59.476158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.513 [2024-10-11 22:58:59.476184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.513 qpair failed and we were unable to recover it.
00:35:56.513 [2024-10-11 22:58:59.476275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.513 [2024-10-11 22:58:59.476302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.513 qpair failed and we were unable to recover it.
00:35:56.513 [2024-10-11 22:58:59.476435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.513 [2024-10-11 22:58:59.476460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.513 qpair failed and we were unable to recover it.
00:35:56.513 [2024-10-11 22:58:59.476266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:35:56.513 [2024-10-11 22:58:59.476322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:35:56.513 [2024-10-11 22:58:59.476372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:35:56.513 [2024-10-11 22:58:59.476376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:35:56.513 [2024-10-11 22:58:59.476560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.476585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 00:35:56.513 [2024-10-11 22:58:59.476702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.476728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 00:35:56.513 [2024-10-11 22:58:59.476814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.476840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 00:35:56.513 [2024-10-11 22:58:59.476922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.476947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 00:35:56.513 [2024-10-11 22:58:59.477040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.477068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 
00:35:56.513 [2024-10-11 22:58:59.477157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.477184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 00:35:56.513 [2024-10-11 22:58:59.477294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.477320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 00:35:56.513 [2024-10-11 22:58:59.477401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.477427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 00:35:56.513 [2024-10-11 22:58:59.477540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.477576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 00:35:56.513 [2024-10-11 22:58:59.477661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.477686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 
00:35:56.513 [2024-10-11 22:58:59.477777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.477804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 00:35:56.513 [2024-10-11 22:58:59.477894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.477932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 00:35:56.513 [2024-10-11 22:58:59.478017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.478043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 00:35:56.513 [2024-10-11 22:58:59.478154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.478181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 00:35:56.513 [2024-10-11 22:58:59.478256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.478282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 
00:35:56.513 [2024-10-11 22:58:59.478374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.478399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 00:35:56.513 [2024-10-11 22:58:59.478473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.478499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 00:35:56.513 [2024-10-11 22:58:59.478590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.478617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 00:35:56.513 [2024-10-11 22:58:59.478704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.478731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 00:35:56.513 [2024-10-11 22:58:59.478820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.478846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 
00:35:56.513 [2024-10-11 22:58:59.478965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.478991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 00:35:56.513 [2024-10-11 22:58:59.479088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.479125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 00:35:56.513 [2024-10-11 22:58:59.479203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.479229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 00:35:56.513 [2024-10-11 22:58:59.479344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.479370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 00:35:56.513 [2024-10-11 22:58:59.479502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.479528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 
00:35:56.513 [2024-10-11 22:58:59.479650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.479678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 00:35:56.513 [2024-10-11 22:58:59.479770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.479797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 00:35:56.513 [2024-10-11 22:58:59.479907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.479935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 00:35:56.513 [2024-10-11 22:58:59.480032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.480058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 00:35:56.513 [2024-10-11 22:58:59.480141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.480166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 
00:35:56.513 [2024-10-11 22:58:59.480244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.480271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 00:35:56.513 [2024-10-11 22:58:59.480357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.480385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 00:35:56.513 [2024-10-11 22:58:59.480464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.480490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 00:35:56.513 [2024-10-11 22:58:59.480596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.480623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 00:35:56.513 [2024-10-11 22:58:59.480735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.513 [2024-10-11 22:58:59.480761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.513 qpair failed and we were unable to recover it. 
00:35:56.513 [2024-10-11 22:58:59.480909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.513 [2024-10-11 22:58:59.480935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.513 qpair failed and we were unable to recover it.
00:35:56.513 [2024-10-11 22:58:59.481059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.513 [2024-10-11 22:58:59.481085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.513 qpair failed and we were unable to recover it.
00:35:56.513 [2024-10-11 22:58:59.481169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.513 [2024-10-11 22:58:59.481200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.513 qpair failed and we were unable to recover it.
00:35:56.513 [2024-10-11 22:58:59.481311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.513 [2024-10-11 22:58:59.481337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.513 qpair failed and we were unable to recover it.
00:35:56.513 [2024-10-11 22:58:59.481416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.513 [2024-10-11 22:58:59.481442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.513 qpair failed and we were unable to recover it.
00:35:56.513 [2024-10-11 22:58:59.481582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.513 [2024-10-11 22:58:59.481611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.513 qpair failed and we were unable to recover it.
00:35:56.513 [2024-10-11 22:58:59.481698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.513 [2024-10-11 22:58:59.481724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.513 qpair failed and we were unable to recover it.
00:35:56.513 [2024-10-11 22:58:59.481832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.513 [2024-10-11 22:58:59.481865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.513 qpair failed and we were unable to recover it.
00:35:56.513 [2024-10-11 22:58:59.481946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.513 [2024-10-11 22:58:59.481971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.513 qpair failed and we were unable to recover it.
00:35:56.513 [2024-10-11 22:58:59.482056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.513 [2024-10-11 22:58:59.482081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.513 qpair failed and we were unable to recover it.
00:35:56.513 [2024-10-11 22:58:59.482164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.513 [2024-10-11 22:58:59.482190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.513 qpair failed and we were unable to recover it.
00:35:56.513 [2024-10-11 22:58:59.482266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.513 [2024-10-11 22:58:59.482292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.513 qpair failed and we were unable to recover it.
00:35:56.513 [2024-10-11 22:58:59.482394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.513 [2024-10-11 22:58:59.482421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.513 qpair failed and we were unable to recover it.
00:35:56.513 [2024-10-11 22:58:59.482501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.513 [2024-10-11 22:58:59.482527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.513 qpair failed and we were unable to recover it.
00:35:56.513 [2024-10-11 22:58:59.482655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.513 [2024-10-11 22:58:59.482680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.513 qpair failed and we were unable to recover it.
00:35:56.513 [2024-10-11 22:58:59.482765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.513 [2024-10-11 22:58:59.482791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.513 qpair failed and we were unable to recover it.
00:35:56.513 [2024-10-11 22:58:59.482888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.513 [2024-10-11 22:58:59.482914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.513 qpair failed and we were unable to recover it.
00:35:56.513 [2024-10-11 22:58:59.483011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.483036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.483159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.483186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.483275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.483304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.483418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.483444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.483520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.483545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.483640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.483666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.483759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.483786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.483927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.483954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.484038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.484064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.484146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.484172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.484288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.484314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.484392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.484417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.484505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.484532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.484626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.484652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.484740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.484767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.484861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.484886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.485016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.485042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.485125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.485151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.485228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.485253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.485330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.485356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.485461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.485487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.485615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.485642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.485772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.485797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.485889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.485914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.486041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.486074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.486159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.486189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.486276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.486302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.486387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.486413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.486535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.486570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.486648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.486674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.486759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.486785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.486910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.486937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.487057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.487083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.487168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.487194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.487285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.487311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.487394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.487420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.487513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.487539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.487640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.487667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.487743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.487769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.487885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.487911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.487993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.488019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.488105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.488132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.488219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.488253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.488346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.488372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.488451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.488477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.488570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.488597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.488687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.488714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.488798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.488823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.488914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.488941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.489036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.489067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.489147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.489173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.489261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.489288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.489379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.489409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.514 [2024-10-11 22:58:59.489505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.514 [2024-10-11 22:58:59.489566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.514 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.489690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.489717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.489797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.489823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.489914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.489940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.490019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.490045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.490128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.490154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.490265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.490299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.490377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.490402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.490500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.490526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.490633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.490659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.490770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.490795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.490909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.490935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.491012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.491038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.491138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.491164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.491246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.491272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.491346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.491371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.491457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.491487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.491588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.491614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.491725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.491750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.491832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.491858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.491946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.491973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.492053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.492079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.492171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.492197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.492302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.492327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.492410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.492435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.492546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.492580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.492671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.492697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.492780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.492807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.492928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.492953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.493047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.493073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.493153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.493184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.493298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.493324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.493400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.493425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.493517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.493571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.493670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.493710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.493813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.493841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.494054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.494091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.494212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.494238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.494341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.494369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.494465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.494498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.494612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.494643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.494741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.494771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.494865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.494899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.495018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.495048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.495153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.515 [2024-10-11 22:58:59.495180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.515 qpair failed and we were unable to recover it.
00:35:56.515 [2024-10-11 22:58:59.495291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.515 [2024-10-11 22:58:59.495317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.515 qpair failed and we were unable to recover it. 00:35:56.515 [2024-10-11 22:58:59.495403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.515 [2024-10-11 22:58:59.495429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.515 qpair failed and we were unable to recover it. 00:35:56.515 [2024-10-11 22:58:59.495529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.515 [2024-10-11 22:58:59.495568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.515 qpair failed and we were unable to recover it. 00:35:56.515 [2024-10-11 22:58:59.495654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.515 [2024-10-11 22:58:59.495689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.515 qpair failed and we were unable to recover it. 00:35:56.515 [2024-10-11 22:58:59.495892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.515 [2024-10-11 22:58:59.495918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.515 qpair failed and we were unable to recover it. 
00:35:56.515 [2024-10-11 22:58:59.496010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.515 [2024-10-11 22:58:59.496037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.515 qpair failed and we were unable to recover it. 00:35:56.515 [2024-10-11 22:58:59.496147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.515 [2024-10-11 22:58:59.496173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.515 qpair failed and we were unable to recover it. 00:35:56.515 [2024-10-11 22:58:59.496301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.515 [2024-10-11 22:58:59.496333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.515 qpair failed and we were unable to recover it. 00:35:56.515 [2024-10-11 22:58:59.496418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.515 [2024-10-11 22:58:59.496443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.515 qpair failed and we were unable to recover it. 00:35:56.515 [2024-10-11 22:58:59.496534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.515 [2024-10-11 22:58:59.496580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.515 qpair failed and we were unable to recover it. 
00:35:56.515 [2024-10-11 22:58:59.496662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.515 [2024-10-11 22:58:59.496688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.515 qpair failed and we were unable to recover it. 00:35:56.515 [2024-10-11 22:58:59.496768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.515 [2024-10-11 22:58:59.496794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.515 qpair failed and we were unable to recover it. 00:35:56.515 [2024-10-11 22:58:59.496926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.515 [2024-10-11 22:58:59.496952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.515 qpair failed and we were unable to recover it. 00:35:56.515 [2024-10-11 22:58:59.497041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.515 [2024-10-11 22:58:59.497067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.515 qpair failed and we were unable to recover it. 00:35:56.515 [2024-10-11 22:58:59.497194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.515 [2024-10-11 22:58:59.497220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.515 qpair failed and we were unable to recover it. 
00:35:56.515 [2024-10-11 22:58:59.497420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.515 [2024-10-11 22:58:59.497447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.515 qpair failed and we were unable to recover it. 00:35:56.515 [2024-10-11 22:58:59.497536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.497581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.497677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.497704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.497798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.497825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.497926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.497961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 
00:35:56.516 [2024-10-11 22:58:59.498055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.498081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.498166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.498196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.498279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.498305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.498391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.498417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.498497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.498523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 
00:35:56.516 [2024-10-11 22:58:59.498633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.498673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.498828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.498855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.498935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.498962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.499056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.499084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.499181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.499207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 
00:35:56.516 [2024-10-11 22:58:59.499339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.499378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.499479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.499506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.499610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.499637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.499834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.499860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.499945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.499971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 
00:35:56.516 [2024-10-11 22:58:59.500054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.500081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.500198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.500224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.500307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.500336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.500430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.500457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.500540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.500585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 
00:35:56.516 [2024-10-11 22:58:59.500672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.500698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.500792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.500819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.500910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.500936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.501028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.501054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.501147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.501172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 
00:35:56.516 [2024-10-11 22:58:59.501297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.501322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.501428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.501454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.501571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.501598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.501676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.501708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.501790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.501815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 
00:35:56.516 [2024-10-11 22:58:59.501938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.501963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.502057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.502084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.502165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.502191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.502273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.502298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.502378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.502403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 
00:35:56.516 [2024-10-11 22:58:59.502491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.502519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.502638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.502664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.502749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.502774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.502893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.502918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.503006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.503031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 
00:35:56.516 [2024-10-11 22:58:59.503112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.503137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.503252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.503278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.503382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.503409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.503525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.503557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.503661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.503688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 
00:35:56.516 [2024-10-11 22:58:59.503776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.503801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.503892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.503917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.504003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.504031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.504114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.504139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.504227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.504252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 
00:35:56.516 [2024-10-11 22:58:59.504336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.504363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.504452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.504478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.504565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.504592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.504675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.504702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.504785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.504810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 
00:35:56.516 [2024-10-11 22:58:59.504903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.504934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.516 [2024-10-11 22:58:59.505029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.516 [2024-10-11 22:58:59.505057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.516 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.505162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.505188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.505315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.505341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.505456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.505482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 
00:35:56.517 [2024-10-11 22:58:59.505579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.505605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.505687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.505714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.505792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.505818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.505906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.505932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.506013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.506043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 
00:35:56.517 [2024-10-11 22:58:59.506130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.506155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.506245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.506271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.506349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.506375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.506447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.506473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.506597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.506623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 
00:35:56.517 [2024-10-11 22:58:59.506731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.506758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.506834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.506859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.506936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.506962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.507088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.507114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.507201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.507226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 
00:35:56.517 [2024-10-11 22:58:59.507315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.507341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.507430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.507455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.507607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.507633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.507721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.507746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.507832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.507858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 
00:35:56.517 [2024-10-11 22:58:59.507941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.507968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.508077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.508102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.508189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.508219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.508301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.508326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.508404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.508430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 
00:35:56.517 [2024-10-11 22:58:59.508592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.508619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.508700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.508727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.508821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.508846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.508922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.508951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.509045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.509070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 
00:35:56.517 [2024-10-11 22:58:59.509166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.509192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.509283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.509308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.509392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.509418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.509528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.509561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.509679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.509704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 
00:35:56.517 [2024-10-11 22:58:59.509784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.509809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.509911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.509937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.510027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.510053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.510128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.510154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.510248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.510285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 
00:35:56.517 [2024-10-11 22:58:59.510376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.510401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.510477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.510503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.510633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.510659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.510783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.510809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.510896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.510922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 
00:35:56.517 [2024-10-11 22:58:59.511011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.511046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.511169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.511194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.511316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.511341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.511428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.511455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.511527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.511572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 
00:35:56.517 [2024-10-11 22:58:59.511652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.511677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.511768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.511794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.511888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.517 [2024-10-11 22:58:59.511915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.517 qpair failed and we were unable to recover it. 00:35:56.517 [2024-10-11 22:58:59.512008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.512033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.512117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.512144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 
00:35:56.518 [2024-10-11 22:58:59.512293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.512336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.512445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.512484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.512614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.512644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.512731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.512757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.512876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.512902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 
00:35:56.518 [2024-10-11 22:58:59.512997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.513024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.513158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.513185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.513277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.513303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.513395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.513421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.513496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.513531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 
00:35:56.518 [2024-10-11 22:58:59.513628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.513653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.513772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.513799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.513906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.513932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.514021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.514046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.514126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.514152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 
00:35:56.518 [2024-10-11 22:58:59.514229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.514257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.514345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.514371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.514486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.514512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.514620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.514647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.514731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.514757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 
00:35:56.518 [2024-10-11 22:58:59.514858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.514884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.514976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.515008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.515095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.515123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.515211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.515237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.515326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.515353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 
00:35:56.518 [2024-10-11 22:58:59.515476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.515514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.515622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.515650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.515772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.515800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.515893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.515925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.516049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.516075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 
00:35:56.518 [2024-10-11 22:58:59.516160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.516187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.516275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.516300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.516383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.516409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.516504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.516531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.516622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.516648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 
00:35:56.518 [2024-10-11 22:58:59.516745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.516772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.516854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.516891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.516981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.517008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.517102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.517128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.517209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.517237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 
00:35:56.518 [2024-10-11 22:58:59.517320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.517348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.517429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.517456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.517579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.517607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.517686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.517712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 00:35:56.518 [2024-10-11 22:58:59.517800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.518 [2024-10-11 22:58:59.517827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.518 qpair failed and we were unable to recover it. 
00:35:56.518 [2024-10-11 22:58:59.517943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.518 [2024-10-11 22:58:59.517969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.518 qpair failed and we were unable to recover it.
00:35:56.518 [2024-10-11 22:58:59.518099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.518 [2024-10-11 22:58:59.518126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.518 qpair failed and we were unable to recover it.
00:35:56.518 [2024-10-11 22:58:59.518211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.518 [2024-10-11 22:58:59.518238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.518 qpair failed and we were unable to recover it.
00:35:56.518 [2024-10-11 22:58:59.518332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.518 [2024-10-11 22:58:59.518370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.518 qpair failed and we were unable to recover it.
00:35:56.518 [2024-10-11 22:58:59.518461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.518 [2024-10-11 22:58:59.518487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.518 qpair failed and we were unable to recover it.
00:35:56.518 [2024-10-11 22:58:59.518581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.518 [2024-10-11 22:58:59.518607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.518 qpair failed and we were unable to recover it.
00:35:56.518 [2024-10-11 22:58:59.518695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.518 [2024-10-11 22:58:59.518721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.518 qpair failed and we were unable to recover it.
00:35:56.518 [2024-10-11 22:58:59.518799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.518 [2024-10-11 22:58:59.518826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.518 qpair failed and we were unable to recover it.
00:35:56.518 [2024-10-11 22:58:59.518915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.518 [2024-10-11 22:58:59.518945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.518 qpair failed and we were unable to recover it.
00:35:56.518 [2024-10-11 22:58:59.519040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.518 [2024-10-11 22:58:59.519068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.518 qpair failed and we were unable to recover it.
00:35:56.518 [2024-10-11 22:58:59.519152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.518 [2024-10-11 22:58:59.519178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.518 qpair failed and we were unable to recover it.
00:35:56.518 [2024-10-11 22:58:59.519300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.518 [2024-10-11 22:58:59.519326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.518 qpair failed and we were unable to recover it.
00:35:56.518 [2024-10-11 22:58:59.519453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.518 [2024-10-11 22:58:59.519479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.518 qpair failed and we were unable to recover it.
00:35:56.518 [2024-10-11 22:58:59.519598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.518 [2024-10-11 22:58:59.519625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.518 qpair failed and we were unable to recover it.
00:35:56.518 [2024-10-11 22:58:59.519706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.518 [2024-10-11 22:58:59.519731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.518 qpair failed and we were unable to recover it.
00:35:56.518 [2024-10-11 22:58:59.519811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.519836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.519922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.519948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.520047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.520074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.520150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.520184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.520271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.520297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.520411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.520436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.520514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.520540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.520664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.520691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.520788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.520814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.520928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.520953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.521041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.521076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.521164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.521190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.521268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.521293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.521370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.521397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.521516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.521557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.521642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.521668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.521744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.521769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.521887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.521913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.522005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.522030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.522108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.522133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.522224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.522249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.522360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.522386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.522484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.522510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.522617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.522644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.522732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.522758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.522836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.522868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.522964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.522989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.523087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.523112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.523188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.523218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.523333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.523359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.523436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.523461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.523564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.523590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.523679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.523704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.523793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.523819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.523905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.523930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.524010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.524042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.524125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.524151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.524256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.524282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.524370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.524397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.524492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.524518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.524607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.524633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.524727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.524753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.524892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.524940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.525035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.525063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.525217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.525244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.525329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.525356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.525436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.525462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.525545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.525576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.525663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.525688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.525762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.525788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.525882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.525908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.525993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.526020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.526109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.526135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.526253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.526279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.526408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.526434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.526512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.526541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.526653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.526680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.526757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.526783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.526902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.526927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.527005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.527030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.527104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.527130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.527261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.527287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.527369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.527396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.527478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.527503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.527614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.527653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.519 qpair failed and we were unable to recover it.
00:35:56.519 [2024-10-11 22:58:59.527751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.519 [2024-10-11 22:58:59.527780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.527870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.527897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.528057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.528083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.528194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.528221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.528311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.528339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.528425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.528453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.528544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.528578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.528651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.528677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.528759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.528784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.528876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.528903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.529034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.529063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.529159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.529186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.529313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.529339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.529460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.529486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.529580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.529606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.529693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.529718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.529803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.529828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.529946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.529977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.530066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.530093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.530188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.530224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.530300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.530328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.530420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.530447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.530530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.530568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.530682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.530708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.530794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.530821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.530923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.530948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.531076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.531101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.531186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.531212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.531289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.531315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.531393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.531428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.531546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.531577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.531696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.531723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.531831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.531861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.531942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.531967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.532043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.532071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.532180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.532206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.532354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.532391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.532490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.532517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.532615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.532642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.532729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.532756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.532835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.532863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.532972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.532998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.533110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.533136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.533215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.533248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.533352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.533380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.533495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.533522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.533631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.533658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.533740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.533765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.533922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.533948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.534044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.534070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.534160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.534186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.534301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.534327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.534416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.534442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.534532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.534574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.534674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.534700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.534787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.534813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.534902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.534927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.535020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.535050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.535150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.520 [2024-10-11 22:58:59.535177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.520 qpair failed and we were unable to recover it.
00:35:56.520 [2024-10-11 22:58:59.535261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.535287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.535368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.535394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.535484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.535523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.535650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.535677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.535764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.535791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.535872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.535908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.535988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.536014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.536100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.536129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.536213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.536239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.536330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.536360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.536450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.536476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.536576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.536604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.536698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.536724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.536807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.536834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.536922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.536947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.537063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.537089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.537179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.537205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.537329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.537354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.537470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.537497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.537593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.537620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.537698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.537724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.537808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.537833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.537939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.537965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.538080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.538106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.538199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.538224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.538339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.538370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.538446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.538472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.538576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.538604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.538691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.538718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.538802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.538829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.538959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.538985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.539108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.539135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.539279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.539305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.539502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.539527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.539622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.539648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.539769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.539795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.539918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.539944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.540041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.540067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.540153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.540179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.540262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.540288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.540406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.540432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.540521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.540575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.540660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.540688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.540774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.540800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.540928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.540954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.541045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.541072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.541188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.541214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.541299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.541325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.541409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.541443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.541525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.541562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.541647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.521 [2024-10-11 22:58:59.541674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.521 qpair failed and we were unable to recover it.
00:35:56.521 [2024-10-11 22:58:59.541756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.521 [2024-10-11 22:58:59.541782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.521 qpair failed and we were unable to recover it. 00:35:56.521 [2024-10-11 22:58:59.541905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.521 [2024-10-11 22:58:59.541931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.521 qpair failed and we were unable to recover it. 00:35:56.521 [2024-10-11 22:58:59.542028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.521 [2024-10-11 22:58:59.542057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.521 qpair failed and we were unable to recover it. 00:35:56.521 [2024-10-11 22:58:59.542143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.521 [2024-10-11 22:58:59.542168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.521 qpair failed and we were unable to recover it. 00:35:56.521 [2024-10-11 22:58:59.542288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.521 [2024-10-11 22:58:59.542314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.521 qpair failed and we were unable to recover it. 
00:35:56.521 [2024-10-11 22:58:59.542394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.521 [2024-10-11 22:58:59.542421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.521 qpair failed and we were unable to recover it. 00:35:56.521 [2024-10-11 22:58:59.542498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.521 [2024-10-11 22:58:59.542524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.521 qpair failed and we were unable to recover it. 00:35:56.521 [2024-10-11 22:58:59.542623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.521 [2024-10-11 22:58:59.542649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.521 qpair failed and we were unable to recover it. 00:35:56.521 [2024-10-11 22:58:59.542740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.521 [2024-10-11 22:58:59.542766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.521 qpair failed and we were unable to recover it. 00:35:56.521 [2024-10-11 22:58:59.542844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.521 [2024-10-11 22:58:59.542872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.521 qpair failed and we were unable to recover it. 
00:35:56.521 [2024-10-11 22:58:59.542976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.521 [2024-10-11 22:58:59.543001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.521 qpair failed and we were unable to recover it. 00:35:56.521 [2024-10-11 22:58:59.543141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.521 [2024-10-11 22:58:59.543168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.521 qpair failed and we were unable to recover it. 00:35:56.521 [2024-10-11 22:58:59.543287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.521 [2024-10-11 22:58:59.543312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.521 qpair failed and we were unable to recover it. 00:35:56.521 [2024-10-11 22:58:59.543423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.521 [2024-10-11 22:58:59.543449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.521 qpair failed and we were unable to recover it. 00:35:56.521 [2024-10-11 22:58:59.543529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.521 [2024-10-11 22:58:59.543584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.521 qpair failed and we were unable to recover it. 
00:35:56.521 [2024-10-11 22:58:59.543664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.543690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.543782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.543809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.543963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.543989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.544061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.544087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.544187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.544214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 
00:35:56.522 [2024-10-11 22:58:59.544303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.544329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.544413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.544439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.544517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.544555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.544665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.544691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.544790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.544829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 
00:35:56.522 [2024-10-11 22:58:59.544931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.544965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.545091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.545118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.545200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.545226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.545319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.545345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.545429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.545455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 
00:35:56.522 [2024-10-11 22:58:59.545545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.545579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.545662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.545688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.545771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.545797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.545877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.545903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.546018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.546043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 
00:35:56.522 [2024-10-11 22:58:59.546152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.546178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.546268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.546295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.546379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.546404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.546488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.546515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.546610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.546636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 
00:35:56.522 [2024-10-11 22:58:59.546747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.546773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.546853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.546884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.546981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.547013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.547106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.547132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.547244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.547273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 
00:35:56.522 [2024-10-11 22:58:59.547384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.547410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.547512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.547548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.547660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.547686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.547798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.547823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.547913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.547938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 
00:35:56.522 [2024-10-11 22:58:59.548051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.548078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.548165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.548193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.548291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.548317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.548397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.548423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.548499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.548525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 
00:35:56.522 [2024-10-11 22:58:59.548671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.548697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.548791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.548817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.548904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.548930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.549012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.549037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.549129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.549155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 
00:35:56.522 [2024-10-11 22:58:59.549251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.549276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.549391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.549416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.549502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.549531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.549645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.549684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.549788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.549817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 
00:35:56.522 [2024-10-11 22:58:59.549906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.549932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.550015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.550041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.550135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.550162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.550243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.550270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.550393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.550432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 
00:35:56.522 [2024-10-11 22:58:59.550521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.550568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.550655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.550682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.550761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.550788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.550884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.550910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.551055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.551080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 
00:35:56.522 [2024-10-11 22:58:59.551165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.522 [2024-10-11 22:58:59.551192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.522 qpair failed and we were unable to recover it. 00:35:56.522 [2024-10-11 22:58:59.551282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.523 [2024-10-11 22:58:59.551309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.523 qpair failed and we were unable to recover it. 00:35:56.523 [2024-10-11 22:58:59.551419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.523 [2024-10-11 22:58:59.551447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.523 qpair failed and we were unable to recover it. 00:35:56.523 [2024-10-11 22:58:59.551521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.523 [2024-10-11 22:58:59.551546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.523 qpair failed and we were unable to recover it. 00:35:56.523 [2024-10-11 22:58:59.551654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.523 [2024-10-11 22:58:59.551681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.523 qpair failed and we were unable to recover it. 
00:35:56.523 [2024-10-11 22:58:59.551767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.523 [2024-10-11 22:58:59.551794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.523 qpair failed and we were unable to recover it. 00:35:56.523 [2024-10-11 22:58:59.551890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.523 [2024-10-11 22:58:59.551932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.523 qpair failed and we were unable to recover it. 00:35:56.523 [2024-10-11 22:58:59.552048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.523 [2024-10-11 22:58:59.552074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.523 qpair failed and we were unable to recover it. 00:35:56.523 [2024-10-11 22:58:59.552153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.523 [2024-10-11 22:58:59.552179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.523 qpair failed and we were unable to recover it. 00:35:56.523 [2024-10-11 22:58:59.552278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.523 [2024-10-11 22:58:59.552306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.523 qpair failed and we were unable to recover it. 
00:35:56.523 [2024-10-11 22:58:59.552389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.552416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.552501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.552527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.552642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.552668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.552760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.552786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.552898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.552932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.553021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.553047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.553125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.553152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.553264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.553291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.553375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.553403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.553511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.553539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.553648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.553674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.553760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.553785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.553885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.553911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.553994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.554021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.554109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.554134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.554230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.554258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.554387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.554414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.554487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.554513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.554607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.554634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.554722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.554750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.554832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.554867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.554952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.554978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.555092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.555118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.555242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.555273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.555364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.555390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.555486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.555513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.555633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.555673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.555798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.555826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.555919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.555946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.556038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.556064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.556147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.556173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.556257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.556283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.556368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.556394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.556486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.556511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.556615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.556643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.556744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.556770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.556865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.556901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.556991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.557015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.557103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.557129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.557240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.557265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.557349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.557377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.557466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.557493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.557597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.557624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.557706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.557732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.557819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.557844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.557941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.557967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.558057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.558083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.558177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.558204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.558288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.558314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.558407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.558432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.558526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.558559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.558641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.558666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.558746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.558772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.558856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.558885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.558978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.559003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.559088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.559116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.559203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.559227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.559340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.523 [2024-10-11 22:58:59.559367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.523 qpair failed and we were unable to recover it.
00:35:56.523 [2024-10-11 22:58:59.559454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.559480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.559606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.559634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.559713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.559739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.559821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.559847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.559974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.559999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.560080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.560111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.560205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.560239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.560323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.560348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.560474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.560499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.560608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.560634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.560713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.560738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.560822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.560847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.560945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.560972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.561059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.561084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.561168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.561194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.561303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.561330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.561403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.561428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.561508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.561534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.561639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.561665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.561746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.561773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.561867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.561892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.561984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.562009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.562090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.562115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.562200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.562225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.562312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.562337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.562452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.562477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.562573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.562598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.562683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.562708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.562794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.562820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.562901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.562934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.563014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.563040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.563128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.563154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.563239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.563269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.563353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.563380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.563460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.563485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.563610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.563637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.563717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.563743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.563836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.563872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.563961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.563987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.564076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.564103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.564198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.564226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.564313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.564339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.564420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.564445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.564535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.564580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.564665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.564691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.564772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.564798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.564891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.564916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.565001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.565027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.565137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.565162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.565248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.565276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.565358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.565383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.565486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.565525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.565629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.565656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.565736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.565762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.565852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.565886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.565973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.565999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.566073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.524 [2024-10-11 22:58:59.566100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.524 qpair failed and we were unable to recover it.
00:35:56.524 [2024-10-11 22:58:59.566184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.524 [2024-10-11 22:58:59.566212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.524 qpair failed and we were unable to recover it. 00:35:56.524 [2024-10-11 22:58:59.566333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.524 [2024-10-11 22:58:59.566361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.524 qpair failed and we were unable to recover it. 00:35:56.524 [2024-10-11 22:58:59.566452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.524 [2024-10-11 22:58:59.566489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.524 qpair failed and we were unable to recover it. 00:35:56.524 [2024-10-11 22:58:59.566591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.524 [2024-10-11 22:58:59.566618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.524 qpair failed and we were unable to recover it. 00:35:56.524 [2024-10-11 22:58:59.566704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.524 [2024-10-11 22:58:59.566729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.524 qpair failed and we were unable to recover it. 
00:35:56.524 [2024-10-11 22:58:59.566815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.524 [2024-10-11 22:58:59.566841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.524 qpair failed and we were unable to recover it. 00:35:56.524 [2024-10-11 22:58:59.566931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.524 [2024-10-11 22:58:59.566958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.524 qpair failed and we were unable to recover it. 00:35:56.524 [2024-10-11 22:58:59.567043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.524 [2024-10-11 22:58:59.567069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.524 qpair failed and we were unable to recover it. 00:35:56.524 [2024-10-11 22:58:59.567171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.524 [2024-10-11 22:58:59.567196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.524 qpair failed and we were unable to recover it. 00:35:56.524 [2024-10-11 22:58:59.567279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.524 [2024-10-11 22:58:59.567305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.524 qpair failed and we were unable to recover it. 
00:35:56.524 [2024-10-11 22:58:59.567380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.524 [2024-10-11 22:58:59.567405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.524 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.567490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.567515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.567610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.567636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.567710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.567735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.567813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.567839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 
00:35:56.525 [2024-10-11 22:58:59.567930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.567955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.568086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.568114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.568200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.568227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.568317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.568344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.568437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.568462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 
00:35:56.525 [2024-10-11 22:58:59.568560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.568586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.568667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.568692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.568776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.568801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.568885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.568911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.569006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.569032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 
00:35:56.525 [2024-10-11 22:58:59.569121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.569149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.569251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.569277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.569365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.569393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.569480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.569506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.569620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.569646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 
00:35:56.525 [2024-10-11 22:58:59.569733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.569759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.569840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.569866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.569951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.569976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.570053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.570078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.570161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.570188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 
00:35:56.525 [2024-10-11 22:58:59.570288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.570313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.570397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.570422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.570499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.570524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.570613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.570638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.570718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.570744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 
00:35:56.525 [2024-10-11 22:58:59.570838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.570868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.570952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.570977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.571058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.571083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.571172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.571197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.571273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.571298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 
00:35:56.525 [2024-10-11 22:58:59.571385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.571413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.571501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.571528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.571628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.571653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.571736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.571761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.571841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.571867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 
00:35:56.525 [2024-10-11 22:58:59.571963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.571988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.572074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.572101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.572186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.572213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.572290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.572315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.572397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.572423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 
00:35:56.525 [2024-10-11 22:58:59.572501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.572526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.572639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.572664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.572755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.572781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.572883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.572910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.572996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.573021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 
00:35:56.525 [2024-10-11 22:58:59.573110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.573136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.573213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.573238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.573323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.573349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.573429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.573456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.573536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.573574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 
00:35:56.525 [2024-10-11 22:58:59.573664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.573689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.573769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.573795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.573881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.573906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.573993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.574019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.574095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.574125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 
00:35:56.525 [2024-10-11 22:58:59.574208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.574235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.574318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.574343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.574424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.574449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.574540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.574573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.574663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.574688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 
00:35:56.525 [2024-10-11 22:58:59.574767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.574792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.574886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.574914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.525 qpair failed and we were unable to recover it. 00:35:56.525 [2024-10-11 22:58:59.575013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.525 [2024-10-11 22:58:59.575039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.575129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.575157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.575239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.575264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 
00:35:56.526 [2024-10-11 22:58:59.575349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.575374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.575451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.575476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.575572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.575599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.575698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.575724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.575805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.575831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 
00:35:56.526 [2024-10-11 22:58:59.575920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.575946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.576026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.576052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.576129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.576155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.576234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.576261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.576350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.576374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 
00:35:56.526 [2024-10-11 22:58:59.576461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.576489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.576583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.576609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.576696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.576722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.576803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.576828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.576922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.576947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 
00:35:56.526 [2024-10-11 22:58:59.577025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.577051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.577134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.577170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.577255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.577281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.577361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.577388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.577479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.577504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 
00:35:56.526 [2024-10-11 22:58:59.577605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.577632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.577723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.577747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.577835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.577860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.577944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.577970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.578058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.578083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 
00:35:56.526 [2024-10-11 22:58:59.578174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.578202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.578285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.578311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.578393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.578419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.578501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.578526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.578624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.578665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 
00:35:56.526 [2024-10-11 22:58:59.578785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.578825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.578946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.578975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.579054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.579081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.579174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.579201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.579285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.579311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 
00:35:56.526 [2024-10-11 22:58:59.579402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.579429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.579507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.579533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.579622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.579649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.579735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.579761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.579851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.579877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 
00:35:56.526 [2024-10-11 22:58:59.579957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.579983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.580062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.580088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.580177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.580207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.580301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.580328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.580427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.580453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 
00:35:56.526 [2024-10-11 22:58:59.580554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.580581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.580667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.580692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.580773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.580800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.580893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.580918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.581004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.581030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 
00:35:56.526 [2024-10-11 22:58:59.581112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.581138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.581236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.581272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.581358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.581384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.581464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.581489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.581579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.581607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 
00:35:56.526 [2024-10-11 22:58:59.581693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.581718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.581796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.581826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.581916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.581940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.582018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.582043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.582120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.582145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 
00:35:56.526 [2024-10-11 22:58:59.582230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.582254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.582335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.582360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.582441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.582466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.582562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.582587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.582665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.582690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 
00:35:56.526 [2024-10-11 22:58:59.582766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.582792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.582882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.582908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.582988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.526 [2024-10-11 22:58:59.583013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.526 qpair failed and we were unable to recover it. 00:35:56.526 [2024-10-11 22:58:59.583096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.583121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 00:35:56.527 [2024-10-11 22:58:59.583204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.583229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 
00:35:56.527 [2024-10-11 22:58:59.583312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.583337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 00:35:56.527 [2024-10-11 22:58:59.583413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.583439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 00:35:56.527 [2024-10-11 22:58:59.583520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.583547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 00:35:56.527 [2024-10-11 22:58:59.583641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.583670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 00:35:56.527 [2024-10-11 22:58:59.583752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.583778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 
00:35:56.527 [2024-10-11 22:58:59.583856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.583881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 00:35:56.527 [2024-10-11 22:58:59.583961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.583986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 00:35:56.527 [2024-10-11 22:58:59.584067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.584092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 00:35:56.527 [2024-10-11 22:58:59.584169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.584195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 00:35:56.527 [2024-10-11 22:58:59.584278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.584303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 
00:35:56.527 [2024-10-11 22:58:59.584387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.584412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 00:35:56.527 [2024-10-11 22:58:59.584498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.584524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 00:35:56.527 [2024-10-11 22:58:59.584623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.584648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 00:35:56.527 [2024-10-11 22:58:59.584728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.584754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 00:35:56.527 [2024-10-11 22:58:59.584836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.584861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 
00:35:56.527 [2024-10-11 22:58:59.584959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.584985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 00:35:56.527 [2024-10-11 22:58:59.585073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.585098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 00:35:56.527 [2024-10-11 22:58:59.585182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.585209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 00:35:56.527 [2024-10-11 22:58:59.585289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.585314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 00:35:56.527 [2024-10-11 22:58:59.585412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.585450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 
00:35:56.527 [2024-10-11 22:58:59.585539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.585575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 00:35:56.527 [2024-10-11 22:58:59.585660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.585686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 00:35:56.527 [2024-10-11 22:58:59.585775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.585800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 00:35:56.527 [2024-10-11 22:58:59.585885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.585911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 00:35:56.527 [2024-10-11 22:58:59.585999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.586024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 
00:35:56.527 [2024-10-11 22:58:59.586100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.586126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 00:35:56.527 [2024-10-11 22:58:59.586243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.586277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 00:35:56.527 [2024-10-11 22:58:59.586356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.586382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 00:35:56.527 [2024-10-11 22:58:59.586456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.586481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 00:35:56.527 [2024-10-11 22:58:59.586580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.527 [2024-10-11 22:58:59.586607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.527 qpair failed and we were unable to recover it. 
00:35:56.527 [2024-10-11 22:58:59.586695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.586720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.586803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.586829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.586929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.586955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.587064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.587089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.587169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.587195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.587281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.587306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.587383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.587409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.587485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.587510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.587609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.587635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.587720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.587746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.587857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.587884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.587971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.587996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.588078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.588104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.588198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.588237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.588331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.588358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.588449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.588477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.588583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.588610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.588691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.588716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.588802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.588829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.588909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.588935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.589013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.589039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.589120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.589146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.589219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.589245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.589358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.589384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.589458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.589484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.589587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.589614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.589693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.589719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.589819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.589844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.589928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.589953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.590040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.590066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.590150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.590176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.590262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.590287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.590368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.590394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.590479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.590505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.527 [2024-10-11 22:58:59.590620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.527 [2024-10-11 22:58:59.590647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.527 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.590735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.590761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.590852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.590882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.590965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.590991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.591072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.591098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.591184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.591210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.591301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.591329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.591432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.591471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.591574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.591602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.591690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.591717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.591830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.591856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.591949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.591974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.592065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.592091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.592177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.592203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.592278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.592303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.592396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.592433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.592540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.592576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.592664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.592689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.592771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.592796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.592883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.592908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.592991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.593023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.593113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.593139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.593223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.593255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.593350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.593377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.593488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.593514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.593624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.593660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.593747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.593774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.593886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.593912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.593993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.594019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:35:56.528 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:35:56.528 [2024-10-11 22:58:59.594210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.594237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:35:56.528 [2024-10-11 22:58:59.594323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.594349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:35:56.528 [2024-10-11 22:58:59.594460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.594486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:56.528 [2024-10-11 22:58:59.594590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.594618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.594706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.594732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.594810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.594835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.594936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.594961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.595050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.595078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.595165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.595204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.595292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.595318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.595406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.595432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.595513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.595540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.595635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.595661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.595741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.595766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.595875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.595900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.595989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.596014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.596093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.596118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.596212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.596240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.596329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.596355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.596443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.596471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.596592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.596618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.596699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.596724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.596803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.596835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.596923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.596948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.597030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.597056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.597173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.597200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.597284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.597309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.597388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.597414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.597496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.597522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.597612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.528 [2024-10-11 22:58:59.597640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.528 qpair failed and we were unable to recover it.
00:35:56.528 [2024-10-11 22:58:59.597715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.528 [2024-10-11 22:58:59.597740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.528 qpair failed and we were unable to recover it. 00:35:56.528 [2024-10-11 22:58:59.597822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.528 [2024-10-11 22:58:59.597849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.528 qpair failed and we were unable to recover it. 00:35:56.528 [2024-10-11 22:58:59.597969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.528 [2024-10-11 22:58:59.597995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.528 qpair failed and we were unable to recover it. 00:35:56.528 [2024-10-11 22:58:59.598078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.528 [2024-10-11 22:58:59.598104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.528 qpair failed and we were unable to recover it. 00:35:56.528 [2024-10-11 22:58:59.598199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.528 [2024-10-11 22:58:59.598225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.528 qpair failed and we were unable to recover it. 
00:35:56.528 [2024-10-11 22:58:59.598299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.528 [2024-10-11 22:58:59.598325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.528 qpair failed and we were unable to recover it. 00:35:56.528 [2024-10-11 22:58:59.598418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.528 [2024-10-11 22:58:59.598444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.528 qpair failed and we were unable to recover it. 00:35:56.528 [2024-10-11 22:58:59.598528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.528 [2024-10-11 22:58:59.598563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.528 qpair failed and we were unable to recover it. 00:35:56.528 [2024-10-11 22:58:59.598648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.528 [2024-10-11 22:58:59.598679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.528 qpair failed and we were unable to recover it. 00:35:56.528 [2024-10-11 22:58:59.598760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.528 [2024-10-11 22:58:59.598785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.528 qpair failed and we were unable to recover it. 
00:35:56.528 [2024-10-11 22:58:59.598880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.528 [2024-10-11 22:58:59.598914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.528 qpair failed and we were unable to recover it. 00:35:56.528 [2024-10-11 22:58:59.598995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.528 [2024-10-11 22:58:59.599020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.528 qpair failed and we were unable to recover it. 00:35:56.528 [2024-10-11 22:58:59.599097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.528 [2024-10-11 22:58:59.599122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.528 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.599204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.599230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.599323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.599352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 
00:35:56.529 [2024-10-11 22:58:59.599476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.599502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.599603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.599629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.599702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.599727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.599806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.599831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.599913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.599939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 
00:35:56.529 [2024-10-11 22:58:59.600018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.600042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.600127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.600152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.600273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.600299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.600393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.600420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.600512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.600537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 
00:35:56.529 [2024-10-11 22:58:59.600636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.600663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.600750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.600775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.600869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.600894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.600972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.600997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.601084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.601111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 
00:35:56.529 [2024-10-11 22:58:59.601193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.601219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.601299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.601335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.601449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.601473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.601569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.601595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.601677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.601702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 
00:35:56.529 [2024-10-11 22:58:59.601782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.601809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.601900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.601926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.602017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.602056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.602158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.602184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.602287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.602326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 
00:35:56.529 [2024-10-11 22:58:59.602456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.602483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.602577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.602602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.602685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.602711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.602791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.602817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.602908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.602932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 
00:35:56.529 [2024-10-11 22:58:59.603015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.603040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.603172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.603198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.603288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.603314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.603398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.603430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.603513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.603537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 
00:35:56.529 [2024-10-11 22:58:59.603629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.603656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.603738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.603764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.603853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.603877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.603958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.603984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.604068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.604095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 
00:35:56.529 [2024-10-11 22:58:59.604174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.604199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.604277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.604302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.604382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.604407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.604494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.604519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.604607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.604633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 
00:35:56.529 [2024-10-11 22:58:59.604723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.604750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.604836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.604862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.604957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.604983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.605104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.605129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.605223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.605248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 
00:35:56.529 [2024-10-11 22:58:59.605350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.605393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.605496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.605525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.605618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.605645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.605841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.605875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.605957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.605983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 
00:35:56.529 [2024-10-11 22:58:59.606081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.606107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.606188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.606225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.606314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.606340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.606447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.606472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.606574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.606601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 
00:35:56.529 [2024-10-11 22:58:59.606677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.606708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.606797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.606822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.606905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.606930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.607008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.607034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.607108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.607134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 
00:35:56.529 [2024-10-11 22:58:59.607233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.607272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.607370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.607409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.607505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.607547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.607656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.607684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.607771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.607799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 
00:35:56.529 [2024-10-11 22:58:59.607883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.607908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.607992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.608025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.529 [2024-10-11 22:58:59.608120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.529 [2024-10-11 22:58:59.608148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.529 qpair failed and we were unable to recover it. 00:35:56.530 [2024-10-11 22:58:59.608240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.530 [2024-10-11 22:58:59.608268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.530 qpair failed and we were unable to recover it. 00:35:56.530 [2024-10-11 22:58:59.608371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.530 [2024-10-11 22:58:59.608397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.530 qpair failed and we were unable to recover it. 
00:35:56.530 [2024-10-11 22:58:59.608485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.530 [2024-10-11 22:58:59.608512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.530 qpair failed and we were unable to recover it. 00:35:56.530 [2024-10-11 22:58:59.608606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.530 [2024-10-11 22:58:59.608636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.530 qpair failed and we were unable to recover it. 00:35:56.530 [2024-10-11 22:58:59.608718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.530 [2024-10-11 22:58:59.608745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.530 qpair failed and we were unable to recover it. 00:35:56.530 [2024-10-11 22:58:59.608831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.530 [2024-10-11 22:58:59.608857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.530 qpair failed and we were unable to recover it. 00:35:56.530 [2024-10-11 22:58:59.608973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.530 [2024-10-11 22:58:59.609000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.530 qpair failed and we were unable to recover it. 
00:35:56.530 [2024-10-11 22:58:59.609098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.609123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.609206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.609232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.609326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.609352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.609433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.609459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.609541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.609577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.609669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.609694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.609781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.609806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.609911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.609938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.610019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.610045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.610154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.610180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.610254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.610281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.610362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.610390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.610476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.610515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.610625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.610664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.610754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.610782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.610871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.610897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.610991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.611017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.611103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.611129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.611208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.611234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.611324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.611361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.611450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.611475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.611588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.611615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.611692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.611719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:56.530 [2024-10-11 22:58:59.611803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.611829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:35:56.530 [2024-10-11 22:58:59.611911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.611938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:56.530 [2024-10-11 22:58:59.612026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.612054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:56.530 [2024-10-11 22:58:59.612143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.612169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.612264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.612292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.612379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.612405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.612519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.612544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.612630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.612656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.612740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.612765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.612844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.612873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.612960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.612987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.613078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.613104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.613188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.613215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.613329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.613357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.613430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.613456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.613533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.613565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.613652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.613677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.613758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.613783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.613882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.613907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.613987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.614014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.614092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.614117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.614227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.614255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.614341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.614367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.614447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.614473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.614576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.614602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.614689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.614717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.614804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.614843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.614951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.614978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.615066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.615091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.615203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.615228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.615316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.615341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.615430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.615457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.615533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.615564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.615649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.615675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.615755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.615781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.615880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.615905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.615988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.530 [2024-10-11 22:58:59.616014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.530 qpair failed and we were unable to recover it.
00:35:56.530 [2024-10-11 22:58:59.616093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.616119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.616202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.616227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.616317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.616357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.616444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.616471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.616563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.616590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.616681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.616707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.616786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.616811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.616908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.616933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.617021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.617047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.617130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.617155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.617240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.617267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.617373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.617399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.617492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.617527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.617648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.617676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.617767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.617793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.617901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.617927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.618013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.618039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.618113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.618139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.618218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.618244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.618326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.618353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.618482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.618521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.618628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.618656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.618742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.618768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.618852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.618878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.618993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.619019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.619094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.619121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.619222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.619249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.619346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.619378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.619463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.619490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.619597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.619624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.619720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.619746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.619839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.619865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.619946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.531 [2024-10-11 22:58:59.619972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.531 qpair failed and we were unable to recover it.
00:35:56.531 [2024-10-11 22:58:59.620065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.620091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.620185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.620213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.620295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.620321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.620441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.620467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.620562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.620590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 
00:35:56.531 [2024-10-11 22:58:59.620670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.620696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.620788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.620819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.620906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.620932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.621019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.621044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.621124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.621150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 
00:35:56.531 [2024-10-11 22:58:59.621229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.621255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.621338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.621364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.621443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.621469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.621567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.621606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.621696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.621723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 
00:35:56.531 [2024-10-11 22:58:59.621806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.621832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.621908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.621934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.622014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.622039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.622129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.622156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.622289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.622317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 
00:35:56.531 [2024-10-11 22:58:59.622413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.622442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.622528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.622560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.622648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.622674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.622759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.622785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.622879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.622906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 
00:35:56.531 [2024-10-11 22:58:59.622986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.623012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.623093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.623119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.623203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.623233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.623344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.623369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.623453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.623479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 
00:35:56.531 [2024-10-11 22:58:59.623571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.623599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.623683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.623709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.623796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.623822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.623917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.623943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.624038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.624065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 
00:35:56.531 [2024-10-11 22:58:59.624180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.624206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.624400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.624427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.624541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.624584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.624677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.624705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.624785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.624811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 
00:35:56.531 [2024-10-11 22:58:59.624898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.624924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.625011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.625038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.625123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.625149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.625233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.625260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 00:35:56.531 [2024-10-11 22:58:59.625348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.531 [2024-10-11 22:58:59.625375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.531 qpair failed and we were unable to recover it. 
00:35:56.531 [2024-10-11 22:58:59.625468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.625496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.625591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.625623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.625707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.625733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.625818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.625845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.625920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.625946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 
00:35:56.532 [2024-10-11 22:58:59.626062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.626087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.626284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.626312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.626403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.626431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.626515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.626543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.626647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.626673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 
00:35:56.532 [2024-10-11 22:58:59.626752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.626777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.626866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.626892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.626970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.626997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.627084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.627110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.627198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.627225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 
00:35:56.532 [2024-10-11 22:58:59.627354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.627381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.627481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.627520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.627617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.627645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.627728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.627755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.627838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.627864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 
00:35:56.532 [2024-10-11 22:58:59.627972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.627998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.628084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.628111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.628226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.628252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.628338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.628364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.628449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.628475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 
00:35:56.532 [2024-10-11 22:58:59.628570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.628599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.628693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.628719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.628811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.628838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.628940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.628967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.629088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.629113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 
00:35:56.532 [2024-10-11 22:58:59.629192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.629219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.629306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.629332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.629409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.629435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.629522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.629558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.629643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.629669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 
00:35:56.532 [2024-10-11 22:58:59.629756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.629782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.629889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.629915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.630058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.630084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.630161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.630186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.630275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.630300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 
00:35:56.532 [2024-10-11 22:58:59.630381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.630406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.630495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.630533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.630642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.630670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.630759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.630785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.630888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.630913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 
00:35:56.532 [2024-10-11 22:58:59.630998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.631024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.631131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.631157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.631250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.631276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.631365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.631395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.631478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.631507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 
00:35:56.532 [2024-10-11 22:58:59.631614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.631641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.631728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.631754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.631835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.631869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.631950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.631977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.632061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.632087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 
00:35:56.532 [2024-10-11 22:58:59.632177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.632205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.632287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.632313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.632392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.632417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.632526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.632558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.632647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.632672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 
00:35:56.532 [2024-10-11 22:58:59.632761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.632786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.632875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.632900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.632987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.633011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.633097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.633122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.633210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.633236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 
00:35:56.532 [2024-10-11 22:58:59.633319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.633344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.633431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.633470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.633584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.633612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.633705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.633738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.633825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.633851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 
00:35:56.532 [2024-10-11 22:58:59.633939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.633964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.634056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.634094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.532 [2024-10-11 22:58:59.634181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.532 [2024-10-11 22:58:59.634208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.532 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.634318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.634344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.634424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.634450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 
00:35:56.533 [2024-10-11 22:58:59.634571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.634599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.634708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.634747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.634833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.634861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.634943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.634969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.635063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.635088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 
00:35:56.533 [2024-10-11 22:58:59.635168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.635195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.635278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.635303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.635394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.635421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.635505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.635530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.635626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.635653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 
00:35:56.533 [2024-10-11 22:58:59.635738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.635765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.635843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.635869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.635949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.635974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.636056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.636083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.636177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.636206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 
00:35:56.533 [2024-10-11 22:58:59.636327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.636354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.636442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.636468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.636562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.636588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.636666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.636691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.636779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.636804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 
00:35:56.533 [2024-10-11 22:58:59.636895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.636931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.637007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.637033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.637147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.637172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.637280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.637305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.637409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.637448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 
00:35:56.533 [2024-10-11 22:58:59.637534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.637567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.637649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.637676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.637762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.637787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.637879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.637906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.637993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.638021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 
00:35:56.533 [2024-10-11 22:58:59.638109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.638137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.638232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.638262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.638385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.638411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.638502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.638529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.638634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.638661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 
00:35:56.533 [2024-10-11 22:58:59.638745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.638770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.638894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.638921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.638999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.639026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.639136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.639161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.639265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.639290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 
00:35:56.533 [2024-10-11 22:58:59.639374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.639399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.639475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.639500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.639654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.639693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.639774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.639801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.639923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.639949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 
00:35:56.533 [2024-10-11 22:58:59.640058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.640086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.640181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.640207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.640302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.640330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.640445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.640472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.640568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.640597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 
00:35:56.533 [2024-10-11 22:58:59.640684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.640711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.640789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.640815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.640896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.640921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.641013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.641041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.641157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.641182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 
00:35:56.533 [2024-10-11 22:58:59.641267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.641292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.641374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.641399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.641480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.641505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.641608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.641635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.641718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.641744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 
00:35:56.533 [2024-10-11 22:58:59.641820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.641851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.641934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.641959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.642049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.642076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.642162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.642188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.642275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.642302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 
00:35:56.533 [2024-10-11 22:58:59.642412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.642437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.642523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.642548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.642637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.642662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.642746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.642771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 00:35:56.533 [2024-10-11 22:58:59.642873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.533 [2024-10-11 22:58:59.642911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.533 qpair failed and we were unable to recover it. 
00:35:56.533 [2024-10-11 22:58:59.643003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.533 [2024-10-11 22:58:59.643030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.533 qpair failed and we were unable to recover it.
00:35:56.533 [2024-10-11 22:58:59.643117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.533 [2024-10-11 22:58:59.643144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.533 qpair failed and we were unable to recover it.
00:35:56.533 [2024-10-11 22:58:59.643227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.533 [2024-10-11 22:58:59.643252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.533 qpair failed and we were unable to recover it.
00:35:56.533 [2024-10-11 22:58:59.643336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.533 [2024-10-11 22:58:59.643361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.533 qpair failed and we were unable to recover it.
00:35:56.533 [2024-10-11 22:58:59.643477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.533 [2024-10-11 22:58:59.643503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.533 qpair failed and we were unable to recover it.
00:35:56.533 [2024-10-11 22:58:59.643597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.533 [2024-10-11 22:58:59.643622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.533 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.643700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.643726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.643809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.643834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.643927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.643954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.644061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.644087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.644171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.644197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.644278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.644303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.644389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.644414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.644505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.644530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.644612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.644638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.644716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.644741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.644816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.644841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.644937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.644967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.645058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.645083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.645164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.645189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.645268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.645293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.645372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.645398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.645518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.645546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.645655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.645682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.645767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.645793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.645886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.645912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.646012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.646038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.646125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.646151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.646237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.646265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.646351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.646379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.646466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.646493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.646604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.646631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.646723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.646749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.646828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.646852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.646947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.646974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.647067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.647095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.647181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.647207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.647289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.647315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.647395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.647420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.647503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.647530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.647617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.647644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.647737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.647764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.647850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.647875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.647959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.647984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.648072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.648099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.648190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.648229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.648313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.648339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.648421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.648447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.648528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.648559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.648648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.648674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.648756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.648782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.648878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.648904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.648986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.649011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.649126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.649154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.649240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.649267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.649367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.649406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.649505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.649535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.649636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.649669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.649753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.649780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.649881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.649907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.649986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.650012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.650098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.650124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.650207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.650232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.650322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.650348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.650434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.650459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.650559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.650587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.650673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.650699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.650774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.650799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.650891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.650917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.651004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.651030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.651110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.651136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.651235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.651260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.651343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.651369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.651479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.651505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.651602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.651631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.651726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.651765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.651894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.651921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.652010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.652037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.652135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.652162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.652243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.652269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.652350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.652378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.652474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.652501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.652626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.534 [2024-10-11 22:58:59.652665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.534 qpair failed and we were unable to recover it.
00:35:56.534 [2024-10-11 22:58:59.652754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.652781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.652879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.652912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.652996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.653029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.653113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.653138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.653219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.653244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.653329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.653362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.653448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.653473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.653567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.653595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.653675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.653701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.653784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.653809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.653899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.653925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.654040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.654066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.654155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.654181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.654272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.654297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.654387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.654415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.654506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.654532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.654635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.654662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.654744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.654769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.654870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.654896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.654993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.655021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.655112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.655139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.655219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.655245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.655325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.655351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.655437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.655462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.655567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.655593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.655687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.655714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.655797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.655824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.655916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.655955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.656049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.656075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.656157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.656182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.656266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.656291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.656373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.656398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.656512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.656539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.656634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.656660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.656755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.656780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.656870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.656895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.656989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.657016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.657101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.657127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.657216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.657242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.657335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.657360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.657436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.657461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.657543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.657595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.535 Malloc0
00:35:56.535 qpair failed and we were unable to recover it.
00:35:56.535 [2024-10-11 22:58:59.657694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.535 [2024-10-11 22:58:59.657720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.657803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.657830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.657950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:56.536 [2024-10-11 22:58:59.657975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.658074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.658100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:35:56.536 [2024-10-11 22:58:59.658183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.658216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:56.536 [2024-10-11 22:58:59.658302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.658329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:56.536 [2024-10-11 22:58:59.658411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.658437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.658522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.658548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.658648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.658675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.658760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.658787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.658872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.658898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.658988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.659019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.659101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.659127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.659223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.659248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.659347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.659386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.659476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.659504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.659604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.659631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.659712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.659738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.659821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.659848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.659934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.659959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.660040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.660067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.660164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.660202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.660300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.660333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.660423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.660450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.660537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.660569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.660669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.660695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.660786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.660812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.660899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.660924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.661010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.661034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.661116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.661142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.661219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.661247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.661360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.661357] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:35:56.536 [2024-10-11 22:58:59.661386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.661475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.661502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.661602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.661630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.661722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.661750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.661834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.661862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.661951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.661977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.662067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.662096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.662216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.662243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.662328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.662355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.662443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.662469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.662548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.662582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.536 [2024-10-11 22:58:59.662673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.536 [2024-10-11 22:58:59.662700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.536 qpair failed and we were unable to recover it.
00:35:56.537 [2024-10-11 22:58:59.662785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.537 [2024-10-11 22:58:59.662810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.537 qpair failed and we were unable to recover it.
00:35:56.537 [2024-10-11 22:58:59.662891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.537 [2024-10-11 22:58:59.662917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.537 qpair failed and we were unable to recover it.
00:35:56.537 [2024-10-11 22:58:59.662995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.537 [2024-10-11 22:58:59.663021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.537 qpair failed and we were unable to recover it.
00:35:56.537 [2024-10-11 22:58:59.663105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.537 [2024-10-11 22:58:59.663131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.537 qpair failed and we were unable to recover it.
00:35:56.537 [2024-10-11 22:58:59.663215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.537 [2024-10-11 22:58:59.663243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.537 qpair failed and we were unable to recover it.
00:35:56.537 [2024-10-11 22:58:59.663331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.537 [2024-10-11 22:58:59.663356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.537 qpair failed and we were unable to recover it.
00:35:56.537 [2024-10-11 22:58:59.663435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.537 [2024-10-11 22:58:59.663461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.537 qpair failed and we were unable to recover it.
00:35:56.537 [2024-10-11 22:58:59.663543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.537 [2024-10-11 22:58:59.663575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.537 qpair failed and we were unable to recover it.
00:35:56.537 [2024-10-11 22:58:59.663664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.537 [2024-10-11 22:58:59.663691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.537 qpair failed and we were unable to recover it.
00:35:56.537 [2024-10-11 22:58:59.663778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.537 [2024-10-11 22:58:59.663804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.537 qpair failed and we were unable to recover it.
00:35:56.537 [2024-10-11 22:58:59.663897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.537 [2024-10-11 22:58:59.663923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.537 qpair failed and we were unable to recover it.
00:35:56.537 [2024-10-11 22:58:59.664031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.537 [2024-10-11 22:58:59.664056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.537 qpair failed and we were unable to recover it.
00:35:56.537 [2024-10-11 22:58:59.664142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.537 [2024-10-11 22:58:59.664168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.537 qpair failed and we were unable to recover it.
00:35:56.537 [2024-10-11 22:58:59.664248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.537 [2024-10-11 22:58:59.664274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.537 qpair failed and we were unable to recover it.
00:35:56.537 [2024-10-11 22:58:59.664357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.537 [2024-10-11 22:58:59.664382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.537 qpair failed and we were unable to recover it.
00:35:56.537 [2024-10-11 22:58:59.664462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.537 [2024-10-11 22:58:59.664490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.537 qpair failed and we were unable to recover it.
00:35:56.537 [2024-10-11 22:58:59.664577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.537 [2024-10-11 22:58:59.664616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.537 qpair failed and we were unable to recover it.
00:35:56.537 [2024-10-11 22:58:59.664713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.537 [2024-10-11 22:58:59.664740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.537 qpair failed and we were unable to recover it.
00:35:56.537 [2024-10-11 22:58:59.664860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.537 [2024-10-11 22:58:59.664887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.537 qpair failed and we were unable to recover it.
00:35:56.537 [2024-10-11 22:58:59.664968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.537 [2024-10-11 22:58:59.664994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.537 qpair failed and we were unable to recover it.
00:35:56.537 [2024-10-11 22:58:59.665085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.537 [2024-10-11 22:58:59.665112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.537 qpair failed and we were unable to recover it. 00:35:56.537 [2024-10-11 22:58:59.665197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.537 [2024-10-11 22:58:59.665224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.537 qpair failed and we were unable to recover it. 00:35:56.537 [2024-10-11 22:58:59.665326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.537 [2024-10-11 22:58:59.665364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.537 qpair failed and we were unable to recover it. 00:35:56.537 [2024-10-11 22:58:59.665458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.537 [2024-10-11 22:58:59.665487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.537 qpair failed and we were unable to recover it. 00:35:56.537 [2024-10-11 22:58:59.665583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.537 [2024-10-11 22:58:59.665610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.537 qpair failed and we were unable to recover it. 
00:35:56.537 [2024-10-11 22:58:59.665697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.537 [2024-10-11 22:58:59.665723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.537 qpair failed and we were unable to recover it. 00:35:56.537 [2024-10-11 22:58:59.665806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.537 [2024-10-11 22:58:59.665832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.537 qpair failed and we were unable to recover it. 00:35:56.537 [2024-10-11 22:58:59.665944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.537 [2024-10-11 22:58:59.665969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.537 qpair failed and we were unable to recover it. 00:35:56.537 [2024-10-11 22:58:59.666053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.537 [2024-10-11 22:58:59.666078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.537 qpair failed and we were unable to recover it. 00:35:56.537 [2024-10-11 22:58:59.666162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.537 [2024-10-11 22:58:59.666190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.537 qpair failed and we were unable to recover it. 
00:35:56.537 [2024-10-11 22:58:59.666321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.537 [2024-10-11 22:58:59.666359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.537 qpair failed and we were unable to recover it. 00:35:56.537 [2024-10-11 22:58:59.666456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.537 [2024-10-11 22:58:59.666484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.537 qpair failed and we were unable to recover it. 00:35:56.537 [2024-10-11 22:58:59.666579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.537 [2024-10-11 22:58:59.666605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.537 qpair failed and we were unable to recover it. 00:35:56.537 [2024-10-11 22:58:59.666694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.537 [2024-10-11 22:58:59.666719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.537 qpair failed and we were unable to recover it. 00:35:56.537 [2024-10-11 22:58:59.666801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.537 [2024-10-11 22:58:59.666827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.537 qpair failed and we were unable to recover it. 
00:35:56.537 [2024-10-11 22:58:59.666919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.537 [2024-10-11 22:58:59.666944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.537 qpair failed and we were unable to recover it. 00:35:56.537 [2024-10-11 22:58:59.667030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.537 [2024-10-11 22:58:59.667059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.537 qpair failed and we were unable to recover it. 00:35:56.537 [2024-10-11 22:58:59.667152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.537 [2024-10-11 22:58:59.667183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.537 qpair failed and we were unable to recover it. 00:35:56.537 [2024-10-11 22:58:59.667274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.537 [2024-10-11 22:58:59.667301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.537 qpair failed and we were unable to recover it. 00:35:56.537 [2024-10-11 22:58:59.667395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.537 [2024-10-11 22:58:59.667420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.537 qpair failed and we were unable to recover it. 
00:35:56.537 [2024-10-11 22:58:59.667508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.537 [2024-10-11 22:58:59.667533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.537 qpair failed and we were unable to recover it. 00:35:56.537 [2024-10-11 22:58:59.667629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.537 [2024-10-11 22:58:59.667654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.537 qpair failed and we were unable to recover it. 00:35:56.537 [2024-10-11 22:58:59.667737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.537 [2024-10-11 22:58:59.667763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.537 qpair failed and we were unable to recover it. 00:35:56.537 [2024-10-11 22:58:59.667843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.537 [2024-10-11 22:58:59.667868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.537 qpair failed and we were unable to recover it. 00:35:56.538 [2024-10-11 22:58:59.667953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.667982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 
00:35:56.538 [2024-10-11 22:58:59.668063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.668089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 00:35:56.538 [2024-10-11 22:58:59.668169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.668195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 00:35:56.538 [2024-10-11 22:58:59.668277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.668303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 00:35:56.538 [2024-10-11 22:58:59.668380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.668412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 00:35:56.538 [2024-10-11 22:58:59.668612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.668639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 
00:35:56.538 [2024-10-11 22:58:59.668731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.668758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 00:35:56.538 [2024-10-11 22:58:59.668848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.668874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 00:35:56.538 [2024-10-11 22:58:59.668954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.668979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 00:35:56.538 [2024-10-11 22:58:59.669083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.669108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 00:35:56.538 [2024-10-11 22:58:59.669228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.669256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 
00:35:56.538 [2024-10-11 22:58:59.669345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.669372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 00:35:56.538 [2024-10-11 22:58:59.669455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.669480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 00:35:56.538 [2024-10-11 22:58:59.669570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.538 [2024-10-11 22:58:59.669598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 00:35:56.538 [2024-10-11 22:58:59.669683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.669709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 
00:35:56.538 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:56.538 [2024-10-11 22:58:59.669800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.669826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 00:35:56.538 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.538 [2024-10-11 22:58:59.669912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.669938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 00:35:56.538 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:56.538 [2024-10-11 22:58:59.670051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.670077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 00:35:56.538 [2024-10-11 22:58:59.670162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.670188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 
00:35:56.538 [2024-10-11 22:58:59.670276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.670302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 00:35:56.538 [2024-10-11 22:58:59.670388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.670415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 00:35:56.538 [2024-10-11 22:58:59.670520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.670546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 00:35:56.538 [2024-10-11 22:58:59.670643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.670669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 00:35:56.538 [2024-10-11 22:58:59.670752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.670779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 
00:35:56.538 [2024-10-11 22:58:59.670865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.670891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 00:35:56.538 [2024-10-11 22:58:59.670975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.671001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 00:35:56.538 [2024-10-11 22:58:59.671099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.671125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 00:35:56.538 [2024-10-11 22:58:59.671233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.671259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 00:35:56.538 [2024-10-11 22:58:59.671363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.671403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 
00:35:56.538 [2024-10-11 22:58:59.671491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.671519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 00:35:56.538 [2024-10-11 22:58:59.671612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.671645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 00:35:56.538 [2024-10-11 22:58:59.671733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.671759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 00:35:56.538 [2024-10-11 22:58:59.671849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.671875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 00:35:56.538 [2024-10-11 22:58:59.671953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.671978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 
00:35:56.538 [2024-10-11 22:58:59.672061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.672086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 00:35:56.538 [2024-10-11 22:58:59.672162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.538 [2024-10-11 22:58:59.672187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.538 qpair failed and we were unable to recover it. 00:35:56.538 [2024-10-11 22:58:59.672267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.672292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 00:35:56.539 [2024-10-11 22:58:59.672372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.672397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 00:35:56.539 [2024-10-11 22:58:59.672477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.672503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 
00:35:56.539 [2024-10-11 22:58:59.672609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.672636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 00:35:56.539 [2024-10-11 22:58:59.672728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.672754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 00:35:56.539 [2024-10-11 22:58:59.672843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.672869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 00:35:56.539 [2024-10-11 22:58:59.672948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.672978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 00:35:56.539 [2024-10-11 22:58:59.673064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.673092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 
00:35:56.539 [2024-10-11 22:58:59.673174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.673200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 00:35:56.539 [2024-10-11 22:58:59.673278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.673304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 00:35:56.539 [2024-10-11 22:58:59.673393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.673419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 00:35:56.539 [2024-10-11 22:58:59.673502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.673528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 00:35:56.539 [2024-10-11 22:58:59.673624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.673653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 
00:35:56.539 [2024-10-11 22:58:59.673739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.673767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 00:35:56.539 [2024-10-11 22:58:59.673849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.673875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 00:35:56.539 [2024-10-11 22:58:59.673958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.673983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 00:35:56.539 [2024-10-11 22:58:59.674068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.674094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 00:35:56.539 [2024-10-11 22:58:59.674182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.674208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 
00:35:56.539 [2024-10-11 22:58:59.674292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.674319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 00:35:56.539 [2024-10-11 22:58:59.674398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.674424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 00:35:56.539 [2024-10-11 22:58:59.674511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.674537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 00:35:56.539 [2024-10-11 22:58:59.674632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.674658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 00:35:56.539 [2024-10-11 22:58:59.674743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.674769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 
00:35:56.539 [2024-10-11 22:58:59.674864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.674902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 00:35:56.539 [2024-10-11 22:58:59.674995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.675022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 00:35:56.539 [2024-10-11 22:58:59.675108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.675136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 00:35:56.539 [2024-10-11 22:58:59.675221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.675248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 00:35:56.539 [2024-10-11 22:58:59.675324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.675349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 
00:35:56.539 [2024-10-11 22:58:59.675435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.675460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 00:35:56.539 [2024-10-11 22:58:59.675541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.675573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 00:35:56.539 [2024-10-11 22:58:59.675665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.675691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 00:35:56.539 [2024-10-11 22:58:59.675781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.675807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 00:35:56.539 [2024-10-11 22:58:59.675890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.539 [2024-10-11 22:58:59.675916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420 00:35:56.539 qpair failed and we were unable to recover it. 
00:35:56.539 [2024-10-11 22:58:59.676000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.539 [2024-10-11 22:58:59.676026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.539 qpair failed and we were unable to recover it.
00:35:56.539 [2024-10-11 22:58:59.676107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.539 [2024-10-11 22:58:59.676132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.539 qpair failed and we were unable to recover it.
00:35:56.539 [2024-10-11 22:58:59.676213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.539 [2024-10-11 22:58:59.676241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.539 qpair failed and we were unable to recover it.
00:35:56.539 [2024-10-11 22:58:59.676323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.539 [2024-10-11 22:58:59.676348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.539 qpair failed and we were unable to recover it.
00:35:56.539 [2024-10-11 22:58:59.676489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.539 [2024-10-11 22:58:59.676515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.539 qpair failed and we were unable to recover it.
00:35:56.539 [2024-10-11 22:58:59.676601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.539 [2024-10-11 22:58:59.676627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.539 qpair failed and we were unable to recover it.
00:35:56.539 [2024-10-11 22:58:59.676710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.539 [2024-10-11 22:58:59.676736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.539 qpair failed and we were unable to recover it.
00:35:56.539 [2024-10-11 22:58:59.676821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.539 [2024-10-11 22:58:59.676849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.539 qpair failed and we were unable to recover it.
00:35:56.539 [2024-10-11 22:58:59.676948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.539 [2024-10-11 22:58:59.676973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.539 qpair failed and we were unable to recover it.
00:35:56.539 [2024-10-11 22:58:59.677051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.539 [2024-10-11 22:58:59.677077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.539 qpair failed and we were unable to recover it.
00:35:56.539 [2024-10-11 22:58:59.677154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.539 [2024-10-11 22:58:59.677179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.539 qpair failed and we were unable to recover it.
00:35:56.539 [2024-10-11 22:58:59.677259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.539 [2024-10-11 22:58:59.677285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.539 qpair failed and we were unable to recover it.
00:35:56.539 [2024-10-11 22:58:59.677373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.539 [2024-10-11 22:58:59.677398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.539 qpair failed and we were unable to recover it.
00:35:56.539 [2024-10-11 22:58:59.677482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.539 [2024-10-11 22:58:59.677513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.539 qpair failed and we were unable to recover it.
00:35:56.539 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:56.539 [2024-10-11 22:58:59.677606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.677632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.677712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 [2024-10-11 22:58:59.677737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.677818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.677847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:56.540 [2024-10-11 22:58:59.677951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.677976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.678052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.678077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.678152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.678178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.678286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.678311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.678395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.678421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.678498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.678524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.678629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.678655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.678729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.678754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.678842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.678867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.678953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.678978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.679073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.679098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.679206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.679232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.679320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.679359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.679454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.679487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.679580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.679608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.679690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.679718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.679825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.679855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.679952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.679979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.680064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.680091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.680178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.680207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.680293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.680319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.680435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.680466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.680563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.680590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.680673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.680698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.680805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.680830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.680935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.680960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.681039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.681065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.681145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.681173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.681287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.681315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.681398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.681424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.681509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.681535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.681637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.681662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.681746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.681772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.681885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.681911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.681993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.682019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.682102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.682128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.682212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.682237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.682315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.682341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.682423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.682449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.682526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.682558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.682653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.682681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.682785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.682824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.682905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.682933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.540 [2024-10-11 22:58:59.683027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.540 [2024-10-11 22:58:59.683054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.540 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.683141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.683167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.683249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.683276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.683365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.683392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.683472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.683499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.683621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.683652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.683739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.683765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.683845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.683870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.683983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.684009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.684088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.684113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.684204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.684231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.684342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.684368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.684445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.684470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.684572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.684598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.684789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.684814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.684894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.684920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.684999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.685025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.685163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.685189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.685276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.685303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:56.541 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:35:56.541 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:56.541 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:56.541 [2024-10-11 22:58:59.686105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.686136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.686232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.686261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.686352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.686379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.686461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.686487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.686584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.686611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.686721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.686747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.686835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.686860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.686940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.686965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.687057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.687083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.687163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.687189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.687265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.687291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.687389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.687428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.687526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.687590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.687685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.687713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.687802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.687830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.687914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.687941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.688021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.688047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.688129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.688156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.688280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.688319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222c340 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.688422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.688461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.688562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.688591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.688684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.688712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.688805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.688831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.688947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.688972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.689058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.689084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.689169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.689194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d0000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.689278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.689308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3cc000b90 with addr=10.0.0.2, port=4420
00:35:56.541 qpair failed and we were unable to recover it.
00:35:56.541 [2024-10-11 22:58:59.689428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.541 [2024-10-11 22:58:59.689455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff3d8000b90 with addr=10.0.0.2, port=4420
00:35:56.542 qpair failed and we were unable to recover it.
00:35:56.542 [2024-10-11 22:58:59.689716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:56.542 [2024-10-11 22:58:59.692203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.542 [2024-10-11 22:58:59.692314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.542 [2024-10-11 22:58:59.692343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.542 [2024-10-11 22:58:59.692359] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.542 [2024-10-11 22:58:59.692371] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:56.542 [2024-10-11 22:58:59.692404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.542 qpair failed and we were unable to recover it.
00:35:56.542 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:56.542 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:35:56.542 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:56.542 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:56.542 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:56.542 22:58:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 400708
00:35:56.542 [2024-10-11 22:58:59.702029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.542 [2024-10-11 22:58:59.702120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.542 [2024-10-11 22:58:59.702147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.542 [2024-10-11 22:58:59.702162] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.542 [2024-10-11 22:58:59.702173] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:56.542 [2024-10-11 22:58:59.702202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.542 qpair failed and we were unable to recover it.
00:35:56.542 [2024-10-11 22:58:59.712085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.542 [2024-10-11 22:58:59.712175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.542 [2024-10-11 22:58:59.712207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.542 [2024-10-11 22:58:59.712221] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.542 [2024-10-11 22:58:59.712233] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:56.542 [2024-10-11 22:58:59.712261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.542 qpair failed and we were unable to recover it.
00:35:56.542 [2024-10-11 22:58:59.722041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.542 [2024-10-11 22:58:59.722134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.542 [2024-10-11 22:58:59.722161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.542 [2024-10-11 22:58:59.722175] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.542 [2024-10-11 22:58:59.722186] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:56.542 [2024-10-11 22:58:59.722214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.542 qpair failed and we were unable to recover it.
00:35:56.542 [2024-10-11 22:58:59.732013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.542 [2024-10-11 22:58:59.732104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.542 [2024-10-11 22:58:59.732130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.542 [2024-10-11 22:58:59.732144] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.542 [2024-10-11 22:58:59.732156] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:56.542 [2024-10-11 22:58:59.732183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.542 qpair failed and we were unable to recover it.
00:35:56.542 [2024-10-11 22:58:59.742010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.542 [2024-10-11 22:58:59.742096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.542 [2024-10-11 22:58:59.742122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.542 [2024-10-11 22:58:59.742137] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.542 [2024-10-11 22:58:59.742149] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:56.542 [2024-10-11 22:58:59.742177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.542 qpair failed and we were unable to recover it.
00:35:56.542 [2024-10-11 22:58:59.752032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.542 [2024-10-11 22:58:59.752114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.542 [2024-10-11 22:58:59.752143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.542 [2024-10-11 22:58:59.752157] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.542 [2024-10-11 22:58:59.752169] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:56.542 [2024-10-11 22:58:59.752203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.542 qpair failed and we were unable to recover it.
00:35:56.801 [2024-10-11 22:58:59.762095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.801 [2024-10-11 22:58:59.762206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.801 [2024-10-11 22:58:59.762240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.801 [2024-10-11 22:58:59.762261] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.801 [2024-10-11 22:58:59.762280] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:56.801 [2024-10-11 22:58:59.762322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.801 qpair failed and we were unable to recover it.
00:35:56.801 [2024-10-11 22:58:59.772166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.801 [2024-10-11 22:58:59.772257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.801 [2024-10-11 22:58:59.772284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.801 [2024-10-11 22:58:59.772298] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.801 [2024-10-11 22:58:59.772310] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:56.801 [2024-10-11 22:58:59.772339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.801 qpair failed and we were unable to recover it.
00:35:56.801 [2024-10-11 22:58:59.782184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.801 [2024-10-11 22:58:59.782279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.801 [2024-10-11 22:58:59.782305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.801 [2024-10-11 22:58:59.782319] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.801 [2024-10-11 22:58:59.782331] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:56.801 [2024-10-11 22:58:59.782359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.801 qpair failed and we were unable to recover it.
00:35:56.801 [2024-10-11 22:58:59.792193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.801 [2024-10-11 22:58:59.792279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.801 [2024-10-11 22:58:59.792304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.801 [2024-10-11 22:58:59.792318] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.801 [2024-10-11 22:58:59.792330] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:56.801 [2024-10-11 22:58:59.792358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.801 qpair failed and we were unable to recover it.
00:35:56.801 [2024-10-11 22:58:59.802289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.801 [2024-10-11 22:58:59.802378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.801 [2024-10-11 22:58:59.802409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.801 [2024-10-11 22:58:59.802423] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.801 [2024-10-11 22:58:59.802435] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:56.801 [2024-10-11 22:58:59.802463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.801 qpair failed and we were unable to recover it.
00:35:56.801 [2024-10-11 22:58:59.812234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.801 [2024-10-11 22:58:59.812353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.801 [2024-10-11 22:58:59.812379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.801 [2024-10-11 22:58:59.812393] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.801 [2024-10-11 22:58:59.812405] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:56.801 [2024-10-11 22:58:59.812439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.801 qpair failed and we were unable to recover it.
00:35:56.801 [2024-10-11 22:58:59.822239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.801 [2024-10-11 22:58:59.822335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.801 [2024-10-11 22:58:59.822361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.801 [2024-10-11 22:58:59.822375] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.801 [2024-10-11 22:58:59.822387] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:56.801 [2024-10-11 22:58:59.822416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.801 qpair failed and we were unable to recover it.
00:35:56.801 [2024-10-11 22:58:59.832299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.801 [2024-10-11 22:58:59.832423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.801 [2024-10-11 22:58:59.832448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.801 [2024-10-11 22:58:59.832462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.801 [2024-10-11 22:58:59.832474] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:56.801 [2024-10-11 22:58:59.832501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.801 qpair failed and we were unable to recover it.
00:35:56.801 [2024-10-11 22:58:59.842293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.801 [2024-10-11 22:58:59.842386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.801 [2024-10-11 22:58:59.842411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.801 [2024-10-11 22:58:59.842424] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.801 [2024-10-11 22:58:59.842436] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:56.801 [2024-10-11 22:58:59.842472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.801 qpair failed and we were unable to recover it.
00:35:56.801 [2024-10-11 22:58:59.852302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.802 [2024-10-11 22:58:59.852392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.802 [2024-10-11 22:58:59.852419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.802 [2024-10-11 22:58:59.852432] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.802 [2024-10-11 22:58:59.852444] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:56.802 [2024-10-11 22:58:59.852472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.802 qpair failed and we were unable to recover it.
00:35:56.802 [2024-10-11 22:58:59.862329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.802 [2024-10-11 22:58:59.862428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.802 [2024-10-11 22:58:59.862454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.802 [2024-10-11 22:58:59.862468] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.802 [2024-10-11 22:58:59.862480] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:56.802 [2024-10-11 22:58:59.862508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.802 qpair failed and we were unable to recover it.
00:35:56.802 [2024-10-11 22:58:59.872377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.802 [2024-10-11 22:58:59.872455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.802 [2024-10-11 22:58:59.872480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.802 [2024-10-11 22:58:59.872495] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.802 [2024-10-11 22:58:59.872507] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:56.802 [2024-10-11 22:58:59.872535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.802 qpair failed and we were unable to recover it.
00:35:56.802 [2024-10-11 22:58:59.882398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.802 [2024-10-11 22:58:59.882489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.802 [2024-10-11 22:58:59.882514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.802 [2024-10-11 22:58:59.882528] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.802 [2024-10-11 22:58:59.882540] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:56.802 [2024-10-11 22:58:59.882577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.802 qpair failed and we were unable to recover it.
00:35:56.802 [2024-10-11 22:58:59.892412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.802 [2024-10-11 22:58:59.892494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.802 [2024-10-11 22:58:59.892526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.802 [2024-10-11 22:58:59.892542] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.802 [2024-10-11 22:58:59.892564] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:56.802 [2024-10-11 22:58:59.892593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.802 qpair failed and we were unable to recover it.
00:35:56.802 [2024-10-11 22:58:59.902435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.802 [2024-10-11 22:58:59.902522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.802 [2024-10-11 22:58:59.902548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.802 [2024-10-11 22:58:59.902573] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.802 [2024-10-11 22:58:59.902585] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:56.802 [2024-10-11 22:58:59.902614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.802 qpair failed and we were unable to recover it.
00:35:56.802 [2024-10-11 22:58:59.912471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.802 [2024-10-11 22:58:59.912564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.802 [2024-10-11 22:58:59.912601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.802 [2024-10-11 22:58:59.912615] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.802 [2024-10-11 22:58:59.912626] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:56.802 [2024-10-11 22:58:59.912655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.802 qpair failed and we were unable to recover it.
00:35:56.802 [2024-10-11 22:58:59.922510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.802 [2024-10-11 22:58:59.922613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.802 [2024-10-11 22:58:59.922638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.802 [2024-10-11 22:58:59.922652] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.802 [2024-10-11 22:58:59.922664] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:56.802 [2024-10-11 22:58:59.922692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.802 qpair failed and we were unable to recover it.
00:35:56.802 [2024-10-11 22:58:59.932538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.802 [2024-10-11 22:58:59.932634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.802 [2024-10-11 22:58:59.932659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.802 [2024-10-11 22:58:59.932673] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.802 [2024-10-11 22:58:59.932690] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:56.802 [2024-10-11 22:58:59.932719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.802 qpair failed and we were unable to recover it.
00:35:56.802 [2024-10-11 22:58:59.942571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.802 [2024-10-11 22:58:59.942657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.802 [2024-10-11 22:58:59.942682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.802 [2024-10-11 22:58:59.942696] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.802 [2024-10-11 22:58:59.942708] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:56.802 [2024-10-11 22:58:59.942736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.802 qpair failed and we were unable to recover it.
00:35:56.802 [2024-10-11 22:58:59.952598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.802 [2024-10-11 22:58:59.952682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.802 [2024-10-11 22:58:59.952707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.802 [2024-10-11 22:58:59.952720] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.802 [2024-10-11 22:58:59.952732] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:56.802 [2024-10-11 22:58:59.952760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.802 qpair failed and we were unable to recover it. 
00:35:56.802 [2024-10-11 22:58:59.962637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.802 [2024-10-11 22:58:59.962724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.802 [2024-10-11 22:58:59.962749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.802 [2024-10-11 22:58:59.962763] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.802 [2024-10-11 22:58:59.962774] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:56.802 [2024-10-11 22:58:59.962802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.802 qpair failed and we were unable to recover it. 
00:35:56.802 [2024-10-11 22:58:59.972660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.802 [2024-10-11 22:58:59.972747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.802 [2024-10-11 22:58:59.972772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.802 [2024-10-11 22:58:59.972787] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.802 [2024-10-11 22:58:59.972799] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:56.802 [2024-10-11 22:58:59.972829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.802 qpair failed and we were unable to recover it. 
00:35:56.802 [2024-10-11 22:58:59.982689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.802 [2024-10-11 22:58:59.982780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.802 [2024-10-11 22:58:59.982806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.802 [2024-10-11 22:58:59.982820] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.802 [2024-10-11 22:58:59.982831] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:56.802 [2024-10-11 22:58:59.982860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.802 qpair failed and we were unable to recover it. 
00:35:56.802 [2024-10-11 22:58:59.992715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.802 [2024-10-11 22:58:59.992803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.802 [2024-10-11 22:58:59.992828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.803 [2024-10-11 22:58:59.992842] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.803 [2024-10-11 22:58:59.992854] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:56.803 [2024-10-11 22:58:59.992882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.803 qpair failed and we were unable to recover it. 
00:35:56.803 [2024-10-11 22:59:00.002796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.803 [2024-10-11 22:59:00.002909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.803 [2024-10-11 22:59:00.002935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.803 [2024-10-11 22:59:00.002950] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.803 [2024-10-11 22:59:00.002962] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:56.803 [2024-10-11 22:59:00.002990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.803 qpair failed and we were unable to recover it. 
00:35:56.803 [2024-10-11 22:59:00.012811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.803 [2024-10-11 22:59:00.012907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.803 [2024-10-11 22:59:00.012935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.803 [2024-10-11 22:59:00.012950] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.803 [2024-10-11 22:59:00.012962] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:56.803 [2024-10-11 22:59:00.012991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.803 qpair failed and we were unable to recover it. 
00:35:56.803 [2024-10-11 22:59:00.022828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.803 [2024-10-11 22:59:00.022918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.803 [2024-10-11 22:59:00.022944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.803 [2024-10-11 22:59:00.022959] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.803 [2024-10-11 22:59:00.022977] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:56.803 [2024-10-11 22:59:00.023007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.803 qpair failed and we were unable to recover it. 
00:35:56.803 [2024-10-11 22:59:00.032872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.803 [2024-10-11 22:59:00.032959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.803 [2024-10-11 22:59:00.032984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.803 [2024-10-11 22:59:00.032998] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.803 [2024-10-11 22:59:00.033010] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:56.803 [2024-10-11 22:59:00.033038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.803 qpair failed and we were unable to recover it. 
00:35:56.803 [2024-10-11 22:59:00.042972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.803 [2024-10-11 22:59:00.043076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.803 [2024-10-11 22:59:00.043103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.803 [2024-10-11 22:59:00.043118] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.803 [2024-10-11 22:59:00.043130] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:56.803 [2024-10-11 22:59:00.043158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.803 qpair failed and we were unable to recover it. 
00:35:56.803 [2024-10-11 22:59:00.052917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.803 [2024-10-11 22:59:00.053000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.803 [2024-10-11 22:59:00.053027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.803 [2024-10-11 22:59:00.053040] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.803 [2024-10-11 22:59:00.053053] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:56.803 [2024-10-11 22:59:00.053082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.803 qpair failed and we were unable to recover it. 
00:35:56.803 [2024-10-11 22:59:00.062961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.803 [2024-10-11 22:59:00.063051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.803 [2024-10-11 22:59:00.063076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.803 [2024-10-11 22:59:00.063090] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.803 [2024-10-11 22:59:00.063102] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:56.803 [2024-10-11 22:59:00.063131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.803 qpair failed and we were unable to recover it. 
00:35:57.062 [2024-10-11 22:59:00.072972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.062 [2024-10-11 22:59:00.073063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.062 [2024-10-11 22:59:00.073091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.062 [2024-10-11 22:59:00.073105] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.062 [2024-10-11 22:59:00.073117] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.062 [2024-10-11 22:59:00.073147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.062 qpair failed and we were unable to recover it. 
00:35:57.062 [2024-10-11 22:59:00.083076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.062 [2024-10-11 22:59:00.083172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.062 [2024-10-11 22:59:00.083199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.062 [2024-10-11 22:59:00.083214] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.062 [2024-10-11 22:59:00.083226] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.062 [2024-10-11 22:59:00.083254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.062 qpair failed and we were unable to recover it. 
00:35:57.062 [2024-10-11 22:59:00.093165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.062 [2024-10-11 22:59:00.093293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.062 [2024-10-11 22:59:00.093319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.062 [2024-10-11 22:59:00.093333] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.062 [2024-10-11 22:59:00.093345] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.062 [2024-10-11 22:59:00.093374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.062 qpair failed and we were unable to recover it. 
00:35:57.062 [2024-10-11 22:59:00.103040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.062 [2024-10-11 22:59:00.103127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.062 [2024-10-11 22:59:00.103153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.062 [2024-10-11 22:59:00.103167] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.062 [2024-10-11 22:59:00.103180] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.062 [2024-10-11 22:59:00.103208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.062 qpair failed and we were unable to recover it. 
00:35:57.062 [2024-10-11 22:59:00.113071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.062 [2024-10-11 22:59:00.113151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.062 [2024-10-11 22:59:00.113177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.062 [2024-10-11 22:59:00.113191] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.062 [2024-10-11 22:59:00.113209] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.062 [2024-10-11 22:59:00.113240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.062 qpair failed and we were unable to recover it. 
00:35:57.062 [2024-10-11 22:59:00.123156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.062 [2024-10-11 22:59:00.123264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.062 [2024-10-11 22:59:00.123290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.062 [2024-10-11 22:59:00.123304] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.062 [2024-10-11 22:59:00.123315] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.062 [2024-10-11 22:59:00.123344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.062 qpair failed and we were unable to recover it. 
00:35:57.062 [2024-10-11 22:59:00.133132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.062 [2024-10-11 22:59:00.133217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.062 [2024-10-11 22:59:00.133243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.062 [2024-10-11 22:59:00.133257] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.062 [2024-10-11 22:59:00.133268] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.062 [2024-10-11 22:59:00.133296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.062 qpair failed and we were unable to recover it. 
00:35:57.062 [2024-10-11 22:59:00.143165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.062 [2024-10-11 22:59:00.143251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.062 [2024-10-11 22:59:00.143276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.062 [2024-10-11 22:59:00.143290] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.062 [2024-10-11 22:59:00.143301] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.062 [2024-10-11 22:59:00.143329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.062 qpair failed and we were unable to recover it. 
00:35:57.062 [2024-10-11 22:59:00.153173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.062 [2024-10-11 22:59:00.153256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.062 [2024-10-11 22:59:00.153281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.062 [2024-10-11 22:59:00.153295] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.062 [2024-10-11 22:59:00.153306] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.062 [2024-10-11 22:59:00.153335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.062 qpair failed and we were unable to recover it. 
00:35:57.062 [2024-10-11 22:59:00.163273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.062 [2024-10-11 22:59:00.163367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.062 [2024-10-11 22:59:00.163392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.062 [2024-10-11 22:59:00.163406] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.063 [2024-10-11 22:59:00.163417] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.063 [2024-10-11 22:59:00.163446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.063 qpair failed and we were unable to recover it. 
00:35:57.063 [2024-10-11 22:59:00.173279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.063 [2024-10-11 22:59:00.173413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.063 [2024-10-11 22:59:00.173438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.063 [2024-10-11 22:59:00.173452] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.063 [2024-10-11 22:59:00.173463] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.063 [2024-10-11 22:59:00.173492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.063 qpair failed and we were unable to recover it. 
00:35:57.063 [2024-10-11 22:59:00.183302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.063 [2024-10-11 22:59:00.183417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.063 [2024-10-11 22:59:00.183443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.063 [2024-10-11 22:59:00.183457] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.063 [2024-10-11 22:59:00.183469] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.063 [2024-10-11 22:59:00.183497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.063 qpair failed and we were unable to recover it. 
00:35:57.063 [2024-10-11 22:59:00.193299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.063 [2024-10-11 22:59:00.193381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.063 [2024-10-11 22:59:00.193405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.063 [2024-10-11 22:59:00.193419] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.063 [2024-10-11 22:59:00.193431] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.063 [2024-10-11 22:59:00.193459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.063 qpair failed and we were unable to recover it. 
00:35:57.063 [2024-10-11 22:59:00.203335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.063 [2024-10-11 22:59:00.203454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.063 [2024-10-11 22:59:00.203479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.063 [2024-10-11 22:59:00.203493] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.063 [2024-10-11 22:59:00.203510] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.063 [2024-10-11 22:59:00.203540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.063 qpair failed and we were unable to recover it. 
00:35:57.063 [2024-10-11 22:59:00.213407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.063 [2024-10-11 22:59:00.213493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.063 [2024-10-11 22:59:00.213519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.063 [2024-10-11 22:59:00.213532] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.063 [2024-10-11 22:59:00.213544] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.063 [2024-10-11 22:59:00.213582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.063 qpair failed and we were unable to recover it. 
00:35:57.063 [2024-10-11 22:59:00.223410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.063 [2024-10-11 22:59:00.223528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.063 [2024-10-11 22:59:00.223560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.063 [2024-10-11 22:59:00.223577] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.063 [2024-10-11 22:59:00.223589] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.063 [2024-10-11 22:59:00.223617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.063 qpair failed and we were unable to recover it. 
00:35:57.063 [2024-10-11 22:59:00.233431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.063 [2024-10-11 22:59:00.233515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.063 [2024-10-11 22:59:00.233540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.063 [2024-10-11 22:59:00.233564] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.063 [2024-10-11 22:59:00.233577] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.063 [2024-10-11 22:59:00.233605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.063 qpair failed and we were unable to recover it. 
00:35:57.063 [2024-10-11 22:59:00.243494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.063 [2024-10-11 22:59:00.243612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.063 [2024-10-11 22:59:00.243638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.063 [2024-10-11 22:59:00.243651] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.063 [2024-10-11 22:59:00.243663] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.063 [2024-10-11 22:59:00.243691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.063 qpair failed and we were unable to recover it. 
00:35:57.063 [2024-10-11 22:59:00.253474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.063 [2024-10-11 22:59:00.253580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.063 [2024-10-11 22:59:00.253606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.063 [2024-10-11 22:59:00.253620] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.063 [2024-10-11 22:59:00.253632] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.063 [2024-10-11 22:59:00.253660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.063 qpair failed and we were unable to recover it.
00:35:57.063 [2024-10-11 22:59:00.263598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.063 [2024-10-11 22:59:00.263684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.063 [2024-10-11 22:59:00.263710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.063 [2024-10-11 22:59:00.263724] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.063 [2024-10-11 22:59:00.263736] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.063 [2024-10-11 22:59:00.263764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.063 qpair failed and we were unable to recover it.
00:35:57.063 [2024-10-11 22:59:00.273520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.063 [2024-10-11 22:59:00.273626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.063 [2024-10-11 22:59:00.273651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.063 [2024-10-11 22:59:00.273665] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.063 [2024-10-11 22:59:00.273677] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.063 [2024-10-11 22:59:00.273705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.063 qpair failed and we were unable to recover it.
00:35:57.063 [2024-10-11 22:59:00.283594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.063 [2024-10-11 22:59:00.283687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.063 [2024-10-11 22:59:00.283713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.063 [2024-10-11 22:59:00.283726] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.063 [2024-10-11 22:59:00.283738] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.063 [2024-10-11 22:59:00.283766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.063 qpair failed and we were unable to recover it.
00:35:57.063 [2024-10-11 22:59:00.293585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.063 [2024-10-11 22:59:00.293667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.063 [2024-10-11 22:59:00.293692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.063 [2024-10-11 22:59:00.293711] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.063 [2024-10-11 22:59:00.293724] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.063 [2024-10-11 22:59:00.293751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.063 qpair failed and we were unable to recover it.
00:35:57.063 [2024-10-11 22:59:00.303620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.063 [2024-10-11 22:59:00.303707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.063 [2024-10-11 22:59:00.303732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.063 [2024-10-11 22:59:00.303746] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.063 [2024-10-11 22:59:00.303758] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.064 [2024-10-11 22:59:00.303786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-10-11 22:59:00.313716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.064 [2024-10-11 22:59:00.313799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.064 [2024-10-11 22:59:00.313823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.064 [2024-10-11 22:59:00.313836] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.064 [2024-10-11 22:59:00.313848] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.064 [2024-10-11 22:59:00.313875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-10-11 22:59:00.323712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.064 [2024-10-11 22:59:00.323803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.064 [2024-10-11 22:59:00.323828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.064 [2024-10-11 22:59:00.323843] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.064 [2024-10-11 22:59:00.323854] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.064 [2024-10-11 22:59:00.323882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.322 [2024-10-11 22:59:00.333720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.322 [2024-10-11 22:59:00.333802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.322 [2024-10-11 22:59:00.333829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.322 [2024-10-11 22:59:00.333844] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.322 [2024-10-11 22:59:00.333856] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.322 [2024-10-11 22:59:00.333885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.322 qpair failed and we were unable to recover it.
00:35:57.322 [2024-10-11 22:59:00.343783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.323 [2024-10-11 22:59:00.343896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.323 [2024-10-11 22:59:00.343922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.323 [2024-10-11 22:59:00.343936] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.323 [2024-10-11 22:59:00.343950] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.323 [2024-10-11 22:59:00.343979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.323 qpair failed and we were unable to recover it.
00:35:57.323 [2024-10-11 22:59:00.353764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.323 [2024-10-11 22:59:00.353850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.323 [2024-10-11 22:59:00.353875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.323 [2024-10-11 22:59:00.353889] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.323 [2024-10-11 22:59:00.353902] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.323 [2024-10-11 22:59:00.353931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.323 qpair failed and we were unable to recover it.
00:35:57.323 [2024-10-11 22:59:00.363810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.323 [2024-10-11 22:59:00.363900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.323 [2024-10-11 22:59:00.363925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.323 [2024-10-11 22:59:00.363939] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.323 [2024-10-11 22:59:00.363952] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.323 [2024-10-11 22:59:00.363980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.323 qpair failed and we were unable to recover it.
00:35:57.323 [2024-10-11 22:59:00.373843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.323 [2024-10-11 22:59:00.373962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.323 [2024-10-11 22:59:00.373986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.323 [2024-10-11 22:59:00.374000] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.323 [2024-10-11 22:59:00.374013] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.323 [2024-10-11 22:59:00.374041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.323 qpair failed and we were unable to recover it.
00:35:57.323 [2024-10-11 22:59:00.383934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.323 [2024-10-11 22:59:00.384028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.323 [2024-10-11 22:59:00.384053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.323 [2024-10-11 22:59:00.384073] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.323 [2024-10-11 22:59:00.384085] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.323 [2024-10-11 22:59:00.384128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.323 qpair failed and we were unable to recover it.
00:35:57.323 [2024-10-11 22:59:00.393912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.323 [2024-10-11 22:59:00.394034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.323 [2024-10-11 22:59:00.394059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.323 [2024-10-11 22:59:00.394073] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.323 [2024-10-11 22:59:00.394086] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.323 [2024-10-11 22:59:00.394116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.323 qpair failed and we were unable to recover it.
00:35:57.323 [2024-10-11 22:59:00.403944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.323 [2024-10-11 22:59:00.404036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.323 [2024-10-11 22:59:00.404060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.323 [2024-10-11 22:59:00.404074] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.323 [2024-10-11 22:59:00.404086] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.323 [2024-10-11 22:59:00.404114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.323 qpair failed and we were unable to recover it.
00:35:57.323 [2024-10-11 22:59:00.413945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.323 [2024-10-11 22:59:00.414032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.323 [2024-10-11 22:59:00.414056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.323 [2024-10-11 22:59:00.414070] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.323 [2024-10-11 22:59:00.414081] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.323 [2024-10-11 22:59:00.414109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.323 qpair failed and we were unable to recover it.
00:35:57.323 [2024-10-11 22:59:00.424024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.323 [2024-10-11 22:59:00.424146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.323 [2024-10-11 22:59:00.424170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.323 [2024-10-11 22:59:00.424185] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.323 [2024-10-11 22:59:00.424198] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.323 [2024-10-11 22:59:00.424227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.323 qpair failed and we were unable to recover it.
00:35:57.323 [2024-10-11 22:59:00.434008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.323 [2024-10-11 22:59:00.434135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.323 [2024-10-11 22:59:00.434160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.323 [2024-10-11 22:59:00.434174] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.323 [2024-10-11 22:59:00.434187] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.323 [2024-10-11 22:59:00.434216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.323 qpair failed and we were unable to recover it.
00:35:57.323 [2024-10-11 22:59:00.444102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.323 [2024-10-11 22:59:00.444193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.323 [2024-10-11 22:59:00.444218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.323 [2024-10-11 22:59:00.444232] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.323 [2024-10-11 22:59:00.444245] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.323 [2024-10-11 22:59:00.444273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.323 qpair failed and we were unable to recover it.
00:35:57.323 [2024-10-11 22:59:00.454062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.323 [2024-10-11 22:59:00.454151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.323 [2024-10-11 22:59:00.454175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.323 [2024-10-11 22:59:00.454189] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.323 [2024-10-11 22:59:00.454202] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.323 [2024-10-11 22:59:00.454230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.323 qpair failed and we were unable to recover it.
00:35:57.323 [2024-10-11 22:59:00.464169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.323 [2024-10-11 22:59:00.464256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.323 [2024-10-11 22:59:00.464281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.323 [2024-10-11 22:59:00.464296] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.323 [2024-10-11 22:59:00.464309] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.323 [2024-10-11 22:59:00.464354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.323 qpair failed and we were unable to recover it.
00:35:57.323 [2024-10-11 22:59:00.474139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.323 [2024-10-11 22:59:00.474223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.323 [2024-10-11 22:59:00.474248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.323 [2024-10-11 22:59:00.474270] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.323 [2024-10-11 22:59:00.474283] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.324 [2024-10-11 22:59:00.474312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.324 qpair failed and we were unable to recover it.
00:35:57.324 [2024-10-11 22:59:00.484195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.324 [2024-10-11 22:59:00.484289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.324 [2024-10-11 22:59:00.484314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.324 [2024-10-11 22:59:00.484329] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.324 [2024-10-11 22:59:00.484341] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.324 [2024-10-11 22:59:00.484370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.324 qpair failed and we were unable to recover it.
00:35:57.324 [2024-10-11 22:59:00.494325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.324 [2024-10-11 22:59:00.494428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.324 [2024-10-11 22:59:00.494452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.324 [2024-10-11 22:59:00.494467] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.324 [2024-10-11 22:59:00.494479] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.324 [2024-10-11 22:59:00.494508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.324 qpair failed and we were unable to recover it.
00:35:57.324 [2024-10-11 22:59:00.504287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.324 [2024-10-11 22:59:00.504406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.324 [2024-10-11 22:59:00.504431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.324 [2024-10-11 22:59:00.504445] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.324 [2024-10-11 22:59:00.504458] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.324 [2024-10-11 22:59:00.504487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.324 qpair failed and we were unable to recover it.
00:35:57.324 [2024-10-11 22:59:00.514270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.324 [2024-10-11 22:59:00.514367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.324 [2024-10-11 22:59:00.514392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.324 [2024-10-11 22:59:00.514406] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.324 [2024-10-11 22:59:00.514419] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.324 [2024-10-11 22:59:00.514448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.324 qpair failed and we were unable to recover it.
00:35:57.324 [2024-10-11 22:59:00.524338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.324 [2024-10-11 22:59:00.524463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.324 [2024-10-11 22:59:00.524487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.324 [2024-10-11 22:59:00.524501] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.324 [2024-10-11 22:59:00.524515] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.324 [2024-10-11 22:59:00.524543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.324 qpair failed and we were unable to recover it.
00:35:57.324 [2024-10-11 22:59:00.534320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.324 [2024-10-11 22:59:00.534404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.324 [2024-10-11 22:59:00.534428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.324 [2024-10-11 22:59:00.534442] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.324 [2024-10-11 22:59:00.534455] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.324 [2024-10-11 22:59:00.534483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.324 qpair failed and we were unable to recover it.
00:35:57.324 [2024-10-11 22:59:00.544325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.324 [2024-10-11 22:59:00.544415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.324 [2024-10-11 22:59:00.544441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.324 [2024-10-11 22:59:00.544455] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.324 [2024-10-11 22:59:00.544479] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.324 [2024-10-11 22:59:00.544508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.324 qpair failed and we were unable to recover it.
00:35:57.324 [2024-10-11 22:59:00.554345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.324 [2024-10-11 22:59:00.554430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.324 [2024-10-11 22:59:00.554454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.324 [2024-10-11 22:59:00.554468] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.324 [2024-10-11 22:59:00.554481] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.324 [2024-10-11 22:59:00.554509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.324 qpair failed and we were unable to recover it.
00:35:57.324 [2024-10-11 22:59:00.564384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.324 [2024-10-11 22:59:00.564470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.324 [2024-10-11 22:59:00.564494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.324 [2024-10-11 22:59:00.564513] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.324 [2024-10-11 22:59:00.564527] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.324 [2024-10-11 22:59:00.564565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.324 qpair failed and we were unable to recover it.
00:35:57.324 [2024-10-11 22:59:00.574425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.324 [2024-10-11 22:59:00.574515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.324 [2024-10-11 22:59:00.574539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.324 [2024-10-11 22:59:00.574563] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.324 [2024-10-11 22:59:00.574577] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.324 [2024-10-11 22:59:00.574607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.324 qpair failed and we were unable to recover it.
00:35:57.324 [2024-10-11 22:59:00.584443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.324 [2024-10-11 22:59:00.584532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.324 [2024-10-11 22:59:00.584566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.324 [2024-10-11 22:59:00.584582] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.324 [2024-10-11 22:59:00.584598] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.324 [2024-10-11 22:59:00.584628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.324 qpair failed and we were unable to recover it.
00:35:57.583 [2024-10-11 22:59:00.594461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.583 [2024-10-11 22:59:00.594596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.583 [2024-10-11 22:59:00.594622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.583 [2024-10-11 22:59:00.594637] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.583 [2024-10-11 22:59:00.594652] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.583 [2024-10-11 22:59:00.594695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.583 qpair failed and we were unable to recover it.
00:35:57.583 [2024-10-11 22:59:00.604547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.583 [2024-10-11 22:59:00.604713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.583 [2024-10-11 22:59:00.604740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.583 [2024-10-11 22:59:00.604755] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.583 [2024-10-11 22:59:00.604767] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.583 [2024-10-11 22:59:00.604798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.583 qpair failed and we were unable to recover it.
00:35:57.583 [2024-10-11 22:59:00.614501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.583 [2024-10-11 22:59:00.614597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.583 [2024-10-11 22:59:00.614623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.583 [2024-10-11 22:59:00.614637] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.583 [2024-10-11 22:59:00.614649] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.583 [2024-10-11 22:59:00.614679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.583 qpair failed and we were unable to recover it. 
00:35:57.583 [2024-10-11 22:59:00.624624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.583 [2024-10-11 22:59:00.624719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.583 [2024-10-11 22:59:00.624744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.583 [2024-10-11 22:59:00.624758] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.583 [2024-10-11 22:59:00.624772] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.583 [2024-10-11 22:59:00.624800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.583 qpair failed and we were unable to recover it. 
00:35:57.583 [2024-10-11 22:59:00.634584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.583 [2024-10-11 22:59:00.634695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.584 [2024-10-11 22:59:00.634719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.584 [2024-10-11 22:59:00.634733] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.584 [2024-10-11 22:59:00.634746] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.584 [2024-10-11 22:59:00.634775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.584 qpair failed and we were unable to recover it.
00:35:57.584 [2024-10-11 22:59:00.644609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.584 [2024-10-11 22:59:00.644697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.584 [2024-10-11 22:59:00.644721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.584 [2024-10-11 22:59:00.644735] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.584 [2024-10-11 22:59:00.644747] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.584 [2024-10-11 22:59:00.644777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.584 qpair failed and we were unable to recover it.
00:35:57.584 [2024-10-11 22:59:00.654660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.584 [2024-10-11 22:59:00.654749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.584 [2024-10-11 22:59:00.654782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.584 [2024-10-11 22:59:00.654797] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.584 [2024-10-11 22:59:00.654810] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.584 [2024-10-11 22:59:00.654838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.584 qpair failed and we were unable to recover it.
00:35:57.584 [2024-10-11 22:59:00.664675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.584 [2024-10-11 22:59:00.664764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.584 [2024-10-11 22:59:00.664788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.584 [2024-10-11 22:59:00.664803] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.584 [2024-10-11 22:59:00.664816] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.584 [2024-10-11 22:59:00.664845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.584 qpair failed and we were unable to recover it.
00:35:57.584 [2024-10-11 22:59:00.674760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.584 [2024-10-11 22:59:00.674881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.584 [2024-10-11 22:59:00.674907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.584 [2024-10-11 22:59:00.674921] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.584 [2024-10-11 22:59:00.674935] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.584 [2024-10-11 22:59:00.674963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.584 qpair failed and we were unable to recover it.
00:35:57.584 [2024-10-11 22:59:00.684720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.584 [2024-10-11 22:59:00.684813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.584 [2024-10-11 22:59:00.684837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.584 [2024-10-11 22:59:00.684852] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.584 [2024-10-11 22:59:00.684864] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.584 [2024-10-11 22:59:00.684893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.584 qpair failed and we were unable to recover it.
00:35:57.584 [2024-10-11 22:59:00.694742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.584 [2024-10-11 22:59:00.694843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.584 [2024-10-11 22:59:00.694868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.584 [2024-10-11 22:59:00.694882] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.584 [2024-10-11 22:59:00.694894] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.584 [2024-10-11 22:59:00.694922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.584 qpair failed and we were unable to recover it.
00:35:57.584 [2024-10-11 22:59:00.704799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.584 [2024-10-11 22:59:00.704929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.584 [2024-10-11 22:59:00.704956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.584 [2024-10-11 22:59:00.704971] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.584 [2024-10-11 22:59:00.704984] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.584 [2024-10-11 22:59:00.705014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.584 qpair failed and we were unable to recover it.
00:35:57.584 [2024-10-11 22:59:00.714803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.584 [2024-10-11 22:59:00.714883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.584 [2024-10-11 22:59:00.714907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.584 [2024-10-11 22:59:00.714922] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.584 [2024-10-11 22:59:00.714934] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.584 [2024-10-11 22:59:00.714963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.584 qpair failed and we were unable to recover it.
00:35:57.584 [2024-10-11 22:59:00.724867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.584 [2024-10-11 22:59:00.724993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.584 [2024-10-11 22:59:00.725019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.584 [2024-10-11 22:59:00.725034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.584 [2024-10-11 22:59:00.725047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.584 [2024-10-11 22:59:00.725076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.584 qpair failed and we were unable to recover it.
00:35:57.584 [2024-10-11 22:59:00.734873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.584 [2024-10-11 22:59:00.734964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.584 [2024-10-11 22:59:00.734989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.584 [2024-10-11 22:59:00.735003] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.584 [2024-10-11 22:59:00.735016] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.584 [2024-10-11 22:59:00.735045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.584 qpair failed and we were unable to recover it.
00:35:57.584 [2024-10-11 22:59:00.744898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.584 [2024-10-11 22:59:00.744993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.584 [2024-10-11 22:59:00.745026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.584 [2024-10-11 22:59:00.745043] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.584 [2024-10-11 22:59:00.745056] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.584 [2024-10-11 22:59:00.745087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.584 qpair failed and we were unable to recover it.
00:35:57.584 [2024-10-11 22:59:00.754902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.584 [2024-10-11 22:59:00.754989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.584 [2024-10-11 22:59:00.755014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.584 [2024-10-11 22:59:00.755028] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.585 [2024-10-11 22:59:00.755041] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.585 [2024-10-11 22:59:00.755071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.585 qpair failed and we were unable to recover it.
00:35:57.585 [2024-10-11 22:59:00.764975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.585 [2024-10-11 22:59:00.765064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.585 [2024-10-11 22:59:00.765090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.585 [2024-10-11 22:59:00.765105] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.585 [2024-10-11 22:59:00.765117] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.585 [2024-10-11 22:59:00.765148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.585 qpair failed and we were unable to recover it.
00:35:57.585 [2024-10-11 22:59:00.774985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.585 [2024-10-11 22:59:00.775073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.585 [2024-10-11 22:59:00.775097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.585 [2024-10-11 22:59:00.775112] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.585 [2024-10-11 22:59:00.775125] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.585 [2024-10-11 22:59:00.775154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.585 qpair failed and we were unable to recover it.
00:35:57.585 [2024-10-11 22:59:00.784991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.585 [2024-10-11 22:59:00.785084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.585 [2024-10-11 22:59:00.785109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.585 [2024-10-11 22:59:00.785123] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.585 [2024-10-11 22:59:00.785136] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.585 [2024-10-11 22:59:00.785171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.585 qpair failed and we were unable to recover it.
00:35:57.585 [2024-10-11 22:59:00.795001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.585 [2024-10-11 22:59:00.795083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.585 [2024-10-11 22:59:00.795108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.585 [2024-10-11 22:59:00.795123] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.585 [2024-10-11 22:59:00.795135] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.585 [2024-10-11 22:59:00.795164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.585 qpair failed and we were unable to recover it.
00:35:57.585 [2024-10-11 22:59:00.805077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.585 [2024-10-11 22:59:00.805164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.585 [2024-10-11 22:59:00.805190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.585 [2024-10-11 22:59:00.805205] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.585 [2024-10-11 22:59:00.805217] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.585 [2024-10-11 22:59:00.805246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.585 qpair failed and we were unable to recover it.
00:35:57.585 [2024-10-11 22:59:00.815114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.585 [2024-10-11 22:59:00.815200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.585 [2024-10-11 22:59:00.815226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.585 [2024-10-11 22:59:00.815241] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.585 [2024-10-11 22:59:00.815254] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.585 [2024-10-11 22:59:00.815282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.585 qpair failed and we were unable to recover it.
00:35:57.585 [2024-10-11 22:59:00.825109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.585 [2024-10-11 22:59:00.825201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.585 [2024-10-11 22:59:00.825226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.585 [2024-10-11 22:59:00.825240] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.585 [2024-10-11 22:59:00.825253] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.585 [2024-10-11 22:59:00.825282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.585 qpair failed and we were unable to recover it.
00:35:57.585 [2024-10-11 22:59:00.835133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.585 [2024-10-11 22:59:00.835212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.585 [2024-10-11 22:59:00.835241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.585 [2024-10-11 22:59:00.835256] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.585 [2024-10-11 22:59:00.835269] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.585 [2024-10-11 22:59:00.835299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.585 qpair failed and we were unable to recover it.
00:35:57.585 [2024-10-11 22:59:00.845188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.585 [2024-10-11 22:59:00.845278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.585 [2024-10-11 22:59:00.845304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.585 [2024-10-11 22:59:00.845318] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.585 [2024-10-11 22:59:00.845332] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.585 [2024-10-11 22:59:00.845362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.585 qpair failed and we were unable to recover it.
00:35:57.844 [2024-10-11 22:59:00.855190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.844 [2024-10-11 22:59:00.855278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.844 [2024-10-11 22:59:00.855304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.844 [2024-10-11 22:59:00.855320] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.844 [2024-10-11 22:59:00.855333] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.844 [2024-10-11 22:59:00.855363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.844 qpair failed and we were unable to recover it.
00:35:57.844 [2024-10-11 22:59:00.865307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.844 [2024-10-11 22:59:00.865401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.844 [2024-10-11 22:59:00.865426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.844 [2024-10-11 22:59:00.865441] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.844 [2024-10-11 22:59:00.865454] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.844 [2024-10-11 22:59:00.865484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.844 qpair failed and we were unable to recover it.
00:35:57.844 [2024-10-11 22:59:00.875361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.844 [2024-10-11 22:59:00.875485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.844 [2024-10-11 22:59:00.875510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.844 [2024-10-11 22:59:00.875525] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.844 [2024-10-11 22:59:00.875538] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.844 [2024-10-11 22:59:00.875592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.845 qpair failed and we were unable to recover it.
00:35:57.845 [2024-10-11 22:59:00.885301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.845 [2024-10-11 22:59:00.885419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.845 [2024-10-11 22:59:00.885447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.845 [2024-10-11 22:59:00.885462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.845 [2024-10-11 22:59:00.885474] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.845 [2024-10-11 22:59:00.885503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.845 qpair failed and we were unable to recover it.
00:35:57.845 [2024-10-11 22:59:00.895327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.845 [2024-10-11 22:59:00.895423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.845 [2024-10-11 22:59:00.895448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.845 [2024-10-11 22:59:00.895463] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.845 [2024-10-11 22:59:00.895479] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.845 [2024-10-11 22:59:00.895509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.845 qpair failed and we were unable to recover it.
00:35:57.845 [2024-10-11 22:59:00.905344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.845 [2024-10-11 22:59:00.905429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.845 [2024-10-11 22:59:00.905455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.845 [2024-10-11 22:59:00.905470] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.845 [2024-10-11 22:59:00.905482] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.845 [2024-10-11 22:59:00.905511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.845 qpair failed and we were unable to recover it.
00:35:57.845 [2024-10-11 22:59:00.915378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.845 [2024-10-11 22:59:00.915469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.845 [2024-10-11 22:59:00.915494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.845 [2024-10-11 22:59:00.915509] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.845 [2024-10-11 22:59:00.915521] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.845 [2024-10-11 22:59:00.915558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.845 qpair failed and we were unable to recover it.
00:35:57.845 [2024-10-11 22:59:00.925398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.845 [2024-10-11 22:59:00.925500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.845 [2024-10-11 22:59:00.925532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.845 [2024-10-11 22:59:00.925548] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.845 [2024-10-11 22:59:00.925573] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:57.845 [2024-10-11 22:59:00.925602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.845 qpair failed and we were unable to recover it.
00:35:57.845 [2024-10-11 22:59:00.935428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.845 [2024-10-11 22:59:00.935520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.845 [2024-10-11 22:59:00.935545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.845 [2024-10-11 22:59:00.935569] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.845 [2024-10-11 22:59:00.935583] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.845 [2024-10-11 22:59:00.935612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.845 qpair failed and we were unable to recover it. 
00:35:57.845 [2024-10-11 22:59:00.945459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.845 [2024-10-11 22:59:00.945595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.845 [2024-10-11 22:59:00.945622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.845 [2024-10-11 22:59:00.945637] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.845 [2024-10-11 22:59:00.945649] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.845 [2024-10-11 22:59:00.945679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.845 qpair failed and we were unable to recover it. 
00:35:57.845 [2024-10-11 22:59:00.955489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.845 [2024-10-11 22:59:00.955577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.845 [2024-10-11 22:59:00.955603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.845 [2024-10-11 22:59:00.955617] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.845 [2024-10-11 22:59:00.955630] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.845 [2024-10-11 22:59:00.955659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.845 qpair failed and we were unable to recover it. 
00:35:57.845 [2024-10-11 22:59:00.965500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.845 [2024-10-11 22:59:00.965609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.845 [2024-10-11 22:59:00.965633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.845 [2024-10-11 22:59:00.965648] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.845 [2024-10-11 22:59:00.965660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.845 [2024-10-11 22:59:00.965693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.845 qpair failed and we were unable to recover it. 
00:35:57.845 [2024-10-11 22:59:00.975512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.845 [2024-10-11 22:59:00.975622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.845 [2024-10-11 22:59:00.975647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.845 [2024-10-11 22:59:00.975661] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.845 [2024-10-11 22:59:00.975674] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.845 [2024-10-11 22:59:00.975703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.845 qpair failed and we were unable to recover it. 
00:35:57.845 [2024-10-11 22:59:00.985596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.845 [2024-10-11 22:59:00.985705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.845 [2024-10-11 22:59:00.985731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.845 [2024-10-11 22:59:00.985746] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.845 [2024-10-11 22:59:00.985759] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.845 [2024-10-11 22:59:00.985788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.845 qpair failed and we were unable to recover it. 
00:35:57.845 [2024-10-11 22:59:00.995660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.845 [2024-10-11 22:59:00.995752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.845 [2024-10-11 22:59:00.995777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.845 [2024-10-11 22:59:00.995792] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.845 [2024-10-11 22:59:00.995804] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.845 [2024-10-11 22:59:00.995832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.845 qpair failed and we were unable to recover it. 
00:35:57.845 [2024-10-11 22:59:01.005672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.845 [2024-10-11 22:59:01.005770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.845 [2024-10-11 22:59:01.005794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.845 [2024-10-11 22:59:01.005809] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.845 [2024-10-11 22:59:01.005821] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.845 [2024-10-11 22:59:01.005855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.845 qpair failed and we were unable to recover it. 
00:35:57.845 [2024-10-11 22:59:01.015658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.845 [2024-10-11 22:59:01.015753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.845 [2024-10-11 22:59:01.015784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.845 [2024-10-11 22:59:01.015799] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.845 [2024-10-11 22:59:01.015812] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.846 [2024-10-11 22:59:01.015841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.846 qpair failed and we were unable to recover it. 
00:35:57.846 [2024-10-11 22:59:01.025678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.846 [2024-10-11 22:59:01.025769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.846 [2024-10-11 22:59:01.025793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.846 [2024-10-11 22:59:01.025807] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.846 [2024-10-11 22:59:01.025819] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.846 [2024-10-11 22:59:01.025848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.846 qpair failed and we were unable to recover it. 
00:35:57.846 [2024-10-11 22:59:01.035696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.846 [2024-10-11 22:59:01.035795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.846 [2024-10-11 22:59:01.035821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.846 [2024-10-11 22:59:01.035835] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.846 [2024-10-11 22:59:01.035848] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.846 [2024-10-11 22:59:01.035877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.846 qpair failed and we were unable to recover it. 
00:35:57.846 [2024-10-11 22:59:01.045749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.846 [2024-10-11 22:59:01.045868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.846 [2024-10-11 22:59:01.045894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.846 [2024-10-11 22:59:01.045908] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.846 [2024-10-11 22:59:01.045920] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.846 [2024-10-11 22:59:01.045948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.846 qpair failed and we were unable to recover it. 
00:35:57.846 [2024-10-11 22:59:01.055771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.846 [2024-10-11 22:59:01.055881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.846 [2024-10-11 22:59:01.055910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.846 [2024-10-11 22:59:01.055927] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.846 [2024-10-11 22:59:01.055940] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.846 [2024-10-11 22:59:01.055975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.846 qpair failed and we were unable to recover it. 
00:35:57.846 [2024-10-11 22:59:01.065792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.846 [2024-10-11 22:59:01.065891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.846 [2024-10-11 22:59:01.065916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.846 [2024-10-11 22:59:01.065931] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.846 [2024-10-11 22:59:01.065943] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.846 [2024-10-11 22:59:01.065971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.846 qpair failed and we were unable to recover it. 
00:35:57.846 [2024-10-11 22:59:01.075858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.846 [2024-10-11 22:59:01.075980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.846 [2024-10-11 22:59:01.076006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.846 [2024-10-11 22:59:01.076021] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.846 [2024-10-11 22:59:01.076034] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.846 [2024-10-11 22:59:01.076062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.846 qpair failed and we were unable to recover it. 
00:35:57.846 [2024-10-11 22:59:01.085903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.846 [2024-10-11 22:59:01.085997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.846 [2024-10-11 22:59:01.086022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.846 [2024-10-11 22:59:01.086036] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.846 [2024-10-11 22:59:01.086048] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.846 [2024-10-11 22:59:01.086077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.846 qpair failed and we were unable to recover it. 
00:35:57.846 [2024-10-11 22:59:01.095969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.846 [2024-10-11 22:59:01.096055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.846 [2024-10-11 22:59:01.096080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.846 [2024-10-11 22:59:01.096094] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.846 [2024-10-11 22:59:01.096106] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.846 [2024-10-11 22:59:01.096135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.846 qpair failed and we were unable to recover it. 
00:35:57.846 [2024-10-11 22:59:01.105921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.846 [2024-10-11 22:59:01.106045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.846 [2024-10-11 22:59:01.106077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.846 [2024-10-11 22:59:01.106092] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.846 [2024-10-11 22:59:01.106105] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:57.846 [2024-10-11 22:59:01.106133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.846 qpair failed and we were unable to recover it. 
00:35:58.105 [2024-10-11 22:59:01.115928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.105 [2024-10-11 22:59:01.116016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.105 [2024-10-11 22:59:01.116041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.105 [2024-10-11 22:59:01.116055] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.105 [2024-10-11 22:59:01.116068] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.105 [2024-10-11 22:59:01.116096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.105 qpair failed and we were unable to recover it. 
00:35:58.105 [2024-10-11 22:59:01.125966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.105 [2024-10-11 22:59:01.126066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.105 [2024-10-11 22:59:01.126090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.105 [2024-10-11 22:59:01.126104] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.105 [2024-10-11 22:59:01.126116] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.105 [2024-10-11 22:59:01.126145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.105 qpair failed and we were unable to recover it. 
00:35:58.105 [2024-10-11 22:59:01.135985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.105 [2024-10-11 22:59:01.136074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.105 [2024-10-11 22:59:01.136098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.105 [2024-10-11 22:59:01.136112] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.105 [2024-10-11 22:59:01.136125] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.105 [2024-10-11 22:59:01.136153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.105 qpair failed and we were unable to recover it. 
00:35:58.105 [2024-10-11 22:59:01.146125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.105 [2024-10-11 22:59:01.146214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.105 [2024-10-11 22:59:01.146241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.105 [2024-10-11 22:59:01.146256] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.105 [2024-10-11 22:59:01.146274] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.105 [2024-10-11 22:59:01.146304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.105 qpair failed and we were unable to recover it. 
00:35:58.105 [2024-10-11 22:59:01.156056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.105 [2024-10-11 22:59:01.156176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.105 [2024-10-11 22:59:01.156202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.105 [2024-10-11 22:59:01.156217] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.105 [2024-10-11 22:59:01.156229] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.105 [2024-10-11 22:59:01.156258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.105 qpair failed and we were unable to recover it. 
00:35:58.105 [2024-10-11 22:59:01.166107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.105 [2024-10-11 22:59:01.166218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.105 [2024-10-11 22:59:01.166245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.105 [2024-10-11 22:59:01.166260] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.105 [2024-10-11 22:59:01.166272] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.106 [2024-10-11 22:59:01.166300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.106 qpair failed and we were unable to recover it. 
00:35:58.106 [2024-10-11 22:59:01.176140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.106 [2024-10-11 22:59:01.176232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.106 [2024-10-11 22:59:01.176257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.106 [2024-10-11 22:59:01.176271] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.106 [2024-10-11 22:59:01.176283] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.106 [2024-10-11 22:59:01.176311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.106 qpair failed and we were unable to recover it. 
00:35:58.106 [2024-10-11 22:59:01.186172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.106 [2024-10-11 22:59:01.186272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.106 [2024-10-11 22:59:01.186298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.106 [2024-10-11 22:59:01.186313] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.106 [2024-10-11 22:59:01.186325] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.106 [2024-10-11 22:59:01.186354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.106 qpair failed and we were unable to recover it. 
00:35:58.106 [2024-10-11 22:59:01.196215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.106 [2024-10-11 22:59:01.196356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.106 [2024-10-11 22:59:01.196386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.106 [2024-10-11 22:59:01.196402] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.106 [2024-10-11 22:59:01.196415] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.106 [2024-10-11 22:59:01.196445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.106 qpair failed and we were unable to recover it. 
00:35:58.106 [2024-10-11 22:59:01.206245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.106 [2024-10-11 22:59:01.206347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.106 [2024-10-11 22:59:01.206372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.106 [2024-10-11 22:59:01.206386] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.106 [2024-10-11 22:59:01.206398] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.106 [2024-10-11 22:59:01.206428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.106 qpair failed and we were unable to recover it. 
00:35:58.106 [2024-10-11 22:59:01.216251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.106 [2024-10-11 22:59:01.216370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.106 [2024-10-11 22:59:01.216395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.106 [2024-10-11 22:59:01.216409] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.106 [2024-10-11 22:59:01.216422] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.106 [2024-10-11 22:59:01.216449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.106 qpair failed and we were unable to recover it. 
00:35:58.106 [2024-10-11 22:59:01.226261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.106 [2024-10-11 22:59:01.226350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.106 [2024-10-11 22:59:01.226375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.106 [2024-10-11 22:59:01.226390] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.106 [2024-10-11 22:59:01.226403] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.106 [2024-10-11 22:59:01.226432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.106 qpair failed and we were unable to recover it. 
00:35:58.106 [2024-10-11 22:59:01.236330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.106 [2024-10-11 22:59:01.236430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.106 [2024-10-11 22:59:01.236456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.106 [2024-10-11 22:59:01.236471] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.106 [2024-10-11 22:59:01.236488] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.106 [2024-10-11 22:59:01.236518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.106 qpair failed and we were unable to recover it. 
00:35:58.106 [2024-10-11 22:59:01.246324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.106 [2024-10-11 22:59:01.246422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.106 [2024-10-11 22:59:01.246447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.106 [2024-10-11 22:59:01.246461] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.106 [2024-10-11 22:59:01.246473] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.106 [2024-10-11 22:59:01.246502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.106 qpair failed and we were unable to recover it. 
00:35:58.106 [2024-10-11 22:59:01.256355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.106 [2024-10-11 22:59:01.256457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.106 [2024-10-11 22:59:01.256483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.106 [2024-10-11 22:59:01.256497] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.106 [2024-10-11 22:59:01.256509] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.106 [2024-10-11 22:59:01.256547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.106 qpair failed and we were unable to recover it. 
00:35:58.106 [2024-10-11 22:59:01.266386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.106 [2024-10-11 22:59:01.266508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.106 [2024-10-11 22:59:01.266544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.106 [2024-10-11 22:59:01.266569] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.106 [2024-10-11 22:59:01.266583] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.106 [2024-10-11 22:59:01.266611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.106 qpair failed and we were unable to recover it. 
00:35:58.106 [2024-10-11 22:59:01.276392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.106 [2024-10-11 22:59:01.276485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.106 [2024-10-11 22:59:01.276510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.106 [2024-10-11 22:59:01.276524] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.106 [2024-10-11 22:59:01.276536] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.106 [2024-10-11 22:59:01.276574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.106 qpair failed and we were unable to recover it. 
00:35:58.106 [2024-10-11 22:59:01.286442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.106 [2024-10-11 22:59:01.286588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.106 [2024-10-11 22:59:01.286614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.106 [2024-10-11 22:59:01.286629] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.106 [2024-10-11 22:59:01.286641] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.106 [2024-10-11 22:59:01.286670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.106 qpair failed and we were unable to recover it. 
00:35:58.106 [2024-10-11 22:59:01.296567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.106 [2024-10-11 22:59:01.296664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.106 [2024-10-11 22:59:01.296688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.106 [2024-10-11 22:59:01.296703] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.106 [2024-10-11 22:59:01.296715] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.106 [2024-10-11 22:59:01.296746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.106 qpair failed and we were unable to recover it. 
00:35:58.106 [2024-10-11 22:59:01.306509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.106 [2024-10-11 22:59:01.306613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.106 [2024-10-11 22:59:01.306640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.106 [2024-10-11 22:59:01.306655] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.106 [2024-10-11 22:59:01.306668] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.106 [2024-10-11 22:59:01.306697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.106 qpair failed and we were unable to recover it. 
00:35:58.106 [2024-10-11 22:59:01.316509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.107 [2024-10-11 22:59:01.316603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.107 [2024-10-11 22:59:01.316628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.107 [2024-10-11 22:59:01.316642] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.107 [2024-10-11 22:59:01.316654] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.107 [2024-10-11 22:59:01.316686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.107 qpair failed and we were unable to recover it. 
00:35:58.107 [2024-10-11 22:59:01.326579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.107 [2024-10-11 22:59:01.326670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.107 [2024-10-11 22:59:01.326696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.107 [2024-10-11 22:59:01.326711] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.107 [2024-10-11 22:59:01.326729] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.107 [2024-10-11 22:59:01.326758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.107 qpair failed and we were unable to recover it. 
00:35:58.107 [2024-10-11 22:59:01.336603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.107 [2024-10-11 22:59:01.336708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.107 [2024-10-11 22:59:01.336734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.107 [2024-10-11 22:59:01.336749] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.107 [2024-10-11 22:59:01.336761] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.107 [2024-10-11 22:59:01.336791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.107 qpair failed and we were unable to recover it. 
00:35:58.107 [2024-10-11 22:59:01.346613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.107 [2024-10-11 22:59:01.346702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.107 [2024-10-11 22:59:01.346727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.107 [2024-10-11 22:59:01.346741] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.107 [2024-10-11 22:59:01.346754] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.107 [2024-10-11 22:59:01.346783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.107 qpair failed and we were unable to recover it. 
00:35:58.107 [2024-10-11 22:59:01.356631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.107 [2024-10-11 22:59:01.356716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.107 [2024-10-11 22:59:01.356740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.107 [2024-10-11 22:59:01.356754] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.107 [2024-10-11 22:59:01.356767] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.107 [2024-10-11 22:59:01.356795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.107 qpair failed and we were unable to recover it. 
00:35:58.107 [2024-10-11 22:59:01.366670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.107 [2024-10-11 22:59:01.366788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.107 [2024-10-11 22:59:01.366814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.107 [2024-10-11 22:59:01.366828] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.107 [2024-10-11 22:59:01.366841] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.107 [2024-10-11 22:59:01.366868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.107 qpair failed and we were unable to recover it. 
00:35:58.366 [2024-10-11 22:59:01.376688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.366 [2024-10-11 22:59:01.376781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.366 [2024-10-11 22:59:01.376805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.366 [2024-10-11 22:59:01.376819] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.366 [2024-10-11 22:59:01.376831] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.366 [2024-10-11 22:59:01.376859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.366 qpair failed and we were unable to recover it. 
00:35:58.366 [2024-10-11 22:59:01.386722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.366 [2024-10-11 22:59:01.386810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.366 [2024-10-11 22:59:01.386834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.366 [2024-10-11 22:59:01.386849] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.366 [2024-10-11 22:59:01.386861] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.366 [2024-10-11 22:59:01.386890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.366 qpair failed and we were unable to recover it. 
00:35:58.366 [2024-10-11 22:59:01.396736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.366 [2024-10-11 22:59:01.396877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.366 [2024-10-11 22:59:01.396903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.366 [2024-10-11 22:59:01.396918] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.366 [2024-10-11 22:59:01.396930] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.366 [2024-10-11 22:59:01.396958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.366 qpair failed and we were unable to recover it. 
00:35:58.366 [2024-10-11 22:59:01.406790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.366 [2024-10-11 22:59:01.406881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.366 [2024-10-11 22:59:01.406905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.366 [2024-10-11 22:59:01.406919] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.366 [2024-10-11 22:59:01.406932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.366 [2024-10-11 22:59:01.406960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.366 qpair failed and we were unable to recover it. 
00:35:58.366 [2024-10-11 22:59:01.416816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.366 [2024-10-11 22:59:01.416912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.366 [2024-10-11 22:59:01.416938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.366 [2024-10-11 22:59:01.416953] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.366 [2024-10-11 22:59:01.416970] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.366 [2024-10-11 22:59:01.417000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.366 qpair failed and we were unable to recover it. 
00:35:58.366 [2024-10-11 22:59:01.426835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.366 [2024-10-11 22:59:01.426926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.366 [2024-10-11 22:59:01.426952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.366 [2024-10-11 22:59:01.426967] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.366 [2024-10-11 22:59:01.426979] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.366 [2024-10-11 22:59:01.427009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.366 qpair failed and we were unable to recover it. 
00:35:58.366 [2024-10-11 22:59:01.436858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.366 [2024-10-11 22:59:01.436945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.366 [2024-10-11 22:59:01.436969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.366 [2024-10-11 22:59:01.436983] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.366 [2024-10-11 22:59:01.436996] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.366 [2024-10-11 22:59:01.437025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.366 qpair failed and we were unable to recover it. 
00:35:58.366 [2024-10-11 22:59:01.446908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.366 [2024-10-11 22:59:01.447022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.367 [2024-10-11 22:59:01.447048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.367 [2024-10-11 22:59:01.447064] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.367 [2024-10-11 22:59:01.447076] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.367 [2024-10-11 22:59:01.447104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.367 qpair failed and we were unable to recover it. 
00:35:58.367 [2024-10-11 22:59:01.456957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.367 [2024-10-11 22:59:01.457082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.367 [2024-10-11 22:59:01.457108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.367 [2024-10-11 22:59:01.457122] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.367 [2024-10-11 22:59:01.457135] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.367 [2024-10-11 22:59:01.457163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.367 qpair failed and we were unable to recover it. 
00:35:58.367 [2024-10-11 22:59:01.466946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.367 [2024-10-11 22:59:01.467035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.367 [2024-10-11 22:59:01.467061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.367 [2024-10-11 22:59:01.467076] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.367 [2024-10-11 22:59:01.467090] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.367 [2024-10-11 22:59:01.467119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.367 qpair failed and we were unable to recover it. 
00:35:58.367 [2024-10-11 22:59:01.477013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.367 [2024-10-11 22:59:01.477125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.367 [2024-10-11 22:59:01.477151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.367 [2024-10-11 22:59:01.477166] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.367 [2024-10-11 22:59:01.477178] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.367 [2024-10-11 22:59:01.477207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.367 qpair failed and we were unable to recover it. 
00:35:58.367 [2024-10-11 22:59:01.487019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.367 [2024-10-11 22:59:01.487124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.367 [2024-10-11 22:59:01.487150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.367 [2024-10-11 22:59:01.487164] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.367 [2024-10-11 22:59:01.487176] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.367 [2024-10-11 22:59:01.487204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.367 qpair failed and we were unable to recover it. 
00:35:58.367 [2024-10-11 22:59:01.497109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.367 [2024-10-11 22:59:01.497217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.367 [2024-10-11 22:59:01.497242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.367 [2024-10-11 22:59:01.497257] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.367 [2024-10-11 22:59:01.497269] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.367 [2024-10-11 22:59:01.497297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.367 qpair failed and we were unable to recover it. 
00:35:58.367 [2024-10-11 22:59:01.507119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.367 [2024-10-11 22:59:01.507216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.367 [2024-10-11 22:59:01.507242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.367 [2024-10-11 22:59:01.507262] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.367 [2024-10-11 22:59:01.507275] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.367 [2024-10-11 22:59:01.507304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.367 qpair failed and we were unable to recover it. 
00:35:58.367 [2024-10-11 22:59:01.517100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.367 [2024-10-11 22:59:01.517192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.367 [2024-10-11 22:59:01.517216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.367 [2024-10-11 22:59:01.517230] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.367 [2024-10-11 22:59:01.517243] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.367 [2024-10-11 22:59:01.517272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.367 qpair failed and we were unable to recover it. 
00:35:58.367 [2024-10-11 22:59:01.527170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.367 [2024-10-11 22:59:01.527279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.367 [2024-10-11 22:59:01.527305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.367 [2024-10-11 22:59:01.527319] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.367 [2024-10-11 22:59:01.527332] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.367 [2024-10-11 22:59:01.527360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.367 qpair failed and we were unable to recover it. 
00:35:58.367 [2024-10-11 22:59:01.537153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.367 [2024-10-11 22:59:01.537243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.367 [2024-10-11 22:59:01.537268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.367 [2024-10-11 22:59:01.537281] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.367 [2024-10-11 22:59:01.537294] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.367 [2024-10-11 22:59:01.537322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.367 qpair failed and we were unable to recover it.
00:35:58.367 [2024-10-11 22:59:01.547149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.367 [2024-10-11 22:59:01.547240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.367 [2024-10-11 22:59:01.547264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.367 [2024-10-11 22:59:01.547279] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.367 [2024-10-11 22:59:01.547292] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.367 [2024-10-11 22:59:01.547320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.367 qpair failed and we were unable to recover it.
00:35:58.367 [2024-10-11 22:59:01.557168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.367 [2024-10-11 22:59:01.557254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.367 [2024-10-11 22:59:01.557279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.367 [2024-10-11 22:59:01.557293] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.367 [2024-10-11 22:59:01.557305] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.367 [2024-10-11 22:59:01.557333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.367 qpair failed and we were unable to recover it.
00:35:58.367 [2024-10-11 22:59:01.567232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.367 [2024-10-11 22:59:01.567324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.367 [2024-10-11 22:59:01.567348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.367 [2024-10-11 22:59:01.567362] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.367 [2024-10-11 22:59:01.567374] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.367 [2024-10-11 22:59:01.567403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.367 qpair failed and we were unable to recover it.
00:35:58.367 [2024-10-11 22:59:01.577291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.367 [2024-10-11 22:59:01.577383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.367 [2024-10-11 22:59:01.577407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.367 [2024-10-11 22:59:01.577421] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.367 [2024-10-11 22:59:01.577433] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.368 [2024-10-11 22:59:01.577462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.368 qpair failed and we were unable to recover it.
00:35:58.368 [2024-10-11 22:59:01.587272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.368 [2024-10-11 22:59:01.587392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.368 [2024-10-11 22:59:01.587418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.368 [2024-10-11 22:59:01.587433] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.368 [2024-10-11 22:59:01.587445] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.368 [2024-10-11 22:59:01.587473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.368 qpair failed and we were unable to recover it.
00:35:58.368 [2024-10-11 22:59:01.597301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.368 [2024-10-11 22:59:01.597392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.368 [2024-10-11 22:59:01.597417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.368 [2024-10-11 22:59:01.597437] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.368 [2024-10-11 22:59:01.597450] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.368 [2024-10-11 22:59:01.597479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.368 qpair failed and we were unable to recover it.
00:35:58.368 [2024-10-11 22:59:01.607367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.368 [2024-10-11 22:59:01.607486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.368 [2024-10-11 22:59:01.607512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.368 [2024-10-11 22:59:01.607528] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.368 [2024-10-11 22:59:01.607540] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.368 [2024-10-11 22:59:01.607589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.368 qpair failed and we were unable to recover it.
00:35:58.368 [2024-10-11 22:59:01.617339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.368 [2024-10-11 22:59:01.617434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.368 [2024-10-11 22:59:01.617458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.368 [2024-10-11 22:59:01.617473] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.368 [2024-10-11 22:59:01.617485] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.368 [2024-10-11 22:59:01.617514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.368 qpair failed and we were unable to recover it.
00:35:58.368 [2024-10-11 22:59:01.627393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.368 [2024-10-11 22:59:01.627485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.368 [2024-10-11 22:59:01.627509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.368 [2024-10-11 22:59:01.627524] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.368 [2024-10-11 22:59:01.627537] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.368 [2024-10-11 22:59:01.627575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.368 qpair failed and we were unable to recover it.
00:35:58.627 [2024-10-11 22:59:01.637409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.627 [2024-10-11 22:59:01.637537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.627 [2024-10-11 22:59:01.637573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.627 [2024-10-11 22:59:01.637589] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.627 [2024-10-11 22:59:01.637602] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.627 [2024-10-11 22:59:01.637630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.627 qpair failed and we were unable to recover it.
00:35:58.627 [2024-10-11 22:59:01.647442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.627 [2024-10-11 22:59:01.647540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.627 [2024-10-11 22:59:01.647571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.627 [2024-10-11 22:59:01.647586] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.627 [2024-10-11 22:59:01.647599] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.627 [2024-10-11 22:59:01.647627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.627 qpair failed and we were unable to recover it.
00:35:58.627 [2024-10-11 22:59:01.657459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.627 [2024-10-11 22:59:01.657562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.627 [2024-10-11 22:59:01.657588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.627 [2024-10-11 22:59:01.657602] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.627 [2024-10-11 22:59:01.657615] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.627 [2024-10-11 22:59:01.657645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.627 qpair failed and we were unable to recover it.
00:35:58.627 [2024-10-11 22:59:01.667515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.627 [2024-10-11 22:59:01.667619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.627 [2024-10-11 22:59:01.667645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.627 [2024-10-11 22:59:01.667660] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.627 [2024-10-11 22:59:01.667673] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.627 [2024-10-11 22:59:01.667701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.627 qpair failed and we were unable to recover it.
00:35:58.627 [2024-10-11 22:59:01.677520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.627 [2024-10-11 22:59:01.677658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.627 [2024-10-11 22:59:01.677684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.627 [2024-10-11 22:59:01.677699] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.627 [2024-10-11 22:59:01.677711] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.627 [2024-10-11 22:59:01.677740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.627 qpair failed and we were unable to recover it.
00:35:58.627 [2024-10-11 22:59:01.687558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.627 [2024-10-11 22:59:01.687666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.627 [2024-10-11 22:59:01.687690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.627 [2024-10-11 22:59:01.687710] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.627 [2024-10-11 22:59:01.687723] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.627 [2024-10-11 22:59:01.687752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.627 qpair failed and we were unable to recover it.
00:35:58.627 [2024-10-11 22:59:01.697631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.627 [2024-10-11 22:59:01.697716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.627 [2024-10-11 22:59:01.697740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.627 [2024-10-11 22:59:01.697754] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.627 [2024-10-11 22:59:01.697766] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.627 [2024-10-11 22:59:01.697796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.627 qpair failed and we were unable to recover it.
00:35:58.628 [2024-10-11 22:59:01.707635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.628 [2024-10-11 22:59:01.707758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.628 [2024-10-11 22:59:01.707784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.628 [2024-10-11 22:59:01.707799] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.628 [2024-10-11 22:59:01.707811] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.628 [2024-10-11 22:59:01.707839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.628 qpair failed and we were unable to recover it.
00:35:58.628 [2024-10-11 22:59:01.717671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.628 [2024-10-11 22:59:01.717797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.628 [2024-10-11 22:59:01.717823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.628 [2024-10-11 22:59:01.717838] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.628 [2024-10-11 22:59:01.717850] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.628 [2024-10-11 22:59:01.717879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.628 qpair failed and we were unable to recover it.
00:35:58.628 [2024-10-11 22:59:01.727706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.628 [2024-10-11 22:59:01.727801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.628 [2024-10-11 22:59:01.727825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.628 [2024-10-11 22:59:01.727851] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.628 [2024-10-11 22:59:01.727864] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.628 [2024-10-11 22:59:01.727893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.628 qpair failed and we were unable to recover it.
00:35:58.628 [2024-10-11 22:59:01.737702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.628 [2024-10-11 22:59:01.737784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.628 [2024-10-11 22:59:01.737809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.628 [2024-10-11 22:59:01.737825] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.628 [2024-10-11 22:59:01.737842] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.628 [2024-10-11 22:59:01.737870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.628 qpair failed and we were unable to recover it.
00:35:58.628 [2024-10-11 22:59:01.747829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.628 [2024-10-11 22:59:01.747930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.628 [2024-10-11 22:59:01.747956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.628 [2024-10-11 22:59:01.747970] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.628 [2024-10-11 22:59:01.747983] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.628 [2024-10-11 22:59:01.748021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.628 qpair failed and we were unable to recover it.
00:35:58.628 [2024-10-11 22:59:01.757764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.628 [2024-10-11 22:59:01.757856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.628 [2024-10-11 22:59:01.757880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.628 [2024-10-11 22:59:01.757894] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.628 [2024-10-11 22:59:01.757906] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.628 [2024-10-11 22:59:01.757934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.628 qpair failed and we were unable to recover it.
00:35:58.628 [2024-10-11 22:59:01.767859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.628 [2024-10-11 22:59:01.768001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.628 [2024-10-11 22:59:01.768025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.628 [2024-10-11 22:59:01.768039] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.628 [2024-10-11 22:59:01.768051] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.628 [2024-10-11 22:59:01.768086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.628 qpair failed and we were unable to recover it.
00:35:58.628 [2024-10-11 22:59:01.777832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.628 [2024-10-11 22:59:01.777934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.628 [2024-10-11 22:59:01.777959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.628 [2024-10-11 22:59:01.777978] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.628 [2024-10-11 22:59:01.777992] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.628 [2024-10-11 22:59:01.778021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.628 qpair failed and we were unable to recover it.
00:35:58.628 [2024-10-11 22:59:01.787840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.628 [2024-10-11 22:59:01.787964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.628 [2024-10-11 22:59:01.787989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.628 [2024-10-11 22:59:01.788004] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.628 [2024-10-11 22:59:01.788017] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.628 [2024-10-11 22:59:01.788047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.628 qpair failed and we were unable to recover it.
00:35:58.628 [2024-10-11 22:59:01.797872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.628 [2024-10-11 22:59:01.798001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.628 [2024-10-11 22:59:01.798025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.628 [2024-10-11 22:59:01.798043] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.628 [2024-10-11 22:59:01.798055] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.628 [2024-10-11 22:59:01.798083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.628 qpair failed and we were unable to recover it.
00:35:58.628 [2024-10-11 22:59:01.807897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.628 [2024-10-11 22:59:01.808017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.628 [2024-10-11 22:59:01.808043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.628 [2024-10-11 22:59:01.808057] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.628 [2024-10-11 22:59:01.808069] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.628 [2024-10-11 22:59:01.808098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.628 qpair failed and we were unable to recover it.
00:35:58.628 [2024-10-11 22:59:01.817950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.628 [2024-10-11 22:59:01.818037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.628 [2024-10-11 22:59:01.818062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.628 [2024-10-11 22:59:01.818076] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.628 [2024-10-11 22:59:01.818088] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.628 [2024-10-11 22:59:01.818116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.628 qpair failed and we were unable to recover it.
00:35:58.628 [2024-10-11 22:59:01.827942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.628 [2024-10-11 22:59:01.828030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.628 [2024-10-11 22:59:01.828055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.628 [2024-10-11 22:59:01.828070] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.628 [2024-10-11 22:59:01.828082] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.628 [2024-10-11 22:59:01.828109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.628 qpair failed and we were unable to recover it.
00:35:58.628 [2024-10-11 22:59:01.837996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.628 [2024-10-11 22:59:01.838090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.628 [2024-10-11 22:59:01.838115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.628 [2024-10-11 22:59:01.838129] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.628 [2024-10-11 22:59:01.838142] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.628 [2024-10-11 22:59:01.838177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.628 qpair failed and we were unable to recover it.
00:35:58.628 [2024-10-11 22:59:01.848012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.628 [2024-10-11 22:59:01.848108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.628 [2024-10-11 22:59:01.848134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.629 [2024-10-11 22:59:01.848149] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.629 [2024-10-11 22:59:01.848161] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.629 [2024-10-11 22:59:01.848190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.629 qpair failed and we were unable to recover it.
00:35:58.629 [2024-10-11 22:59:01.858065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.629 [2024-10-11 22:59:01.858190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.629 [2024-10-11 22:59:01.858217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.629 [2024-10-11 22:59:01.858231] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.629 [2024-10-11 22:59:01.858243] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.629 [2024-10-11 22:59:01.858271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.629 qpair failed and we were unable to recover it.
00:35:58.629 [2024-10-11 22:59:01.868092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.629 [2024-10-11 22:59:01.868219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.629 [2024-10-11 22:59:01.868251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.629 [2024-10-11 22:59:01.868267] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.629 [2024-10-11 22:59:01.868280] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.629 [2024-10-11 22:59:01.868309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.629 qpair failed and we were unable to recover it.
00:35:58.629 [2024-10-11 22:59:01.878131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.629 [2024-10-11 22:59:01.878218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.629 [2024-10-11 22:59:01.878242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.629 [2024-10-11 22:59:01.878257] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.629 [2024-10-11 22:59:01.878269] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.629 [2024-10-11 22:59:01.878298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.629 qpair failed and we were unable to recover it.
00:35:58.629 [2024-10-11 22:59:01.888180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.629 [2024-10-11 22:59:01.888325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.629 [2024-10-11 22:59:01.888351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.629 [2024-10-11 22:59:01.888365] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.629 [2024-10-11 22:59:01.888377] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.629 [2024-10-11 22:59:01.888407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.629 qpair failed and we were unable to recover it.
00:35:58.888 [2024-10-11 22:59:01.898174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.888 [2024-10-11 22:59:01.898265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.888 [2024-10-11 22:59:01.898290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.888 [2024-10-11 22:59:01.898303] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.888 [2024-10-11 22:59:01.898316] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.888 [2024-10-11 22:59:01.898348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.888 qpair failed and we were unable to recover it. 
00:35:58.888 [2024-10-11 22:59:01.908257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.888 [2024-10-11 22:59:01.908355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.888 [2024-10-11 22:59:01.908381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.888 [2024-10-11 22:59:01.908396] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.888 [2024-10-11 22:59:01.908408] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:58.888 [2024-10-11 22:59:01.908436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.888 qpair failed and we were unable to recover it. 
00:35:58.888 [2024-10-11 22:59:01.918185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.888 [2024-10-11 22:59:01.918284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.888 [2024-10-11 22:59:01.918308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.888 [2024-10-11 22:59:01.918322] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.888 [2024-10-11 22:59:01.918334] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.888 [2024-10-11 22:59:01.918363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.888 qpair failed and we were unable to recover it.
00:35:58.888 [2024-10-11 22:59:01.928243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.888 [2024-10-11 22:59:01.928333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.888 [2024-10-11 22:59:01.928357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.888 [2024-10-11 22:59:01.928371] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.888 [2024-10-11 22:59:01.928384] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.888 [2024-10-11 22:59:01.928412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.888 qpair failed and we were unable to recover it.
00:35:58.888 [2024-10-11 22:59:01.938244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.888 [2024-10-11 22:59:01.938338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.888 [2024-10-11 22:59:01.938363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.888 [2024-10-11 22:59:01.938377] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.888 [2024-10-11 22:59:01.938390] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.888 [2024-10-11 22:59:01.938417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.888 qpair failed and we were unable to recover it.
00:35:58.888 [2024-10-11 22:59:01.948283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.888 [2024-10-11 22:59:01.948371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.888 [2024-10-11 22:59:01.948395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.888 [2024-10-11 22:59:01.948409] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.888 [2024-10-11 22:59:01.948422] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.888 [2024-10-11 22:59:01.948451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.888 qpair failed and we were unable to recover it.
00:35:58.888 [2024-10-11 22:59:01.958299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.888 [2024-10-11 22:59:01.958381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.888 [2024-10-11 22:59:01.958411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.888 [2024-10-11 22:59:01.958426] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.888 [2024-10-11 22:59:01.958439] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.888 [2024-10-11 22:59:01.958467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.888 qpair failed and we were unable to recover it.
00:35:58.888 [2024-10-11 22:59:01.968431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.888 [2024-10-11 22:59:01.968522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.888 [2024-10-11 22:59:01.968547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.888 [2024-10-11 22:59:01.968572] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.888 [2024-10-11 22:59:01.968592] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.888 [2024-10-11 22:59:01.968621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.888 qpair failed and we were unable to recover it.
00:35:58.888 [2024-10-11 22:59:01.978352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.888 [2024-10-11 22:59:01.978433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.888 [2024-10-11 22:59:01.978457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.888 [2024-10-11 22:59:01.978472] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.888 [2024-10-11 22:59:01.978484] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.888 [2024-10-11 22:59:01.978512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.888 qpair failed and we were unable to recover it.
00:35:58.888 [2024-10-11 22:59:01.988428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.888 [2024-10-11 22:59:01.988522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.888 [2024-10-11 22:59:01.988547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.888 [2024-10-11 22:59:01.988572] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.888 [2024-10-11 22:59:01.988586] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.888 [2024-10-11 22:59:01.988615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.888 qpair failed and we were unable to recover it.
00:35:58.888 [2024-10-11 22:59:01.998411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.888 [2024-10-11 22:59:01.998492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.888 [2024-10-11 22:59:01.998517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.888 [2024-10-11 22:59:01.998531] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.888 [2024-10-11 22:59:01.998543] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.888 [2024-10-11 22:59:01.998587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.888 qpair failed and we were unable to recover it.
00:35:58.888 [2024-10-11 22:59:02.008453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.888 [2024-10-11 22:59:02.008584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.888 [2024-10-11 22:59:02.008611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.888 [2024-10-11 22:59:02.008627] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.888 [2024-10-11 22:59:02.008639] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.888 [2024-10-11 22:59:02.008669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.888 qpair failed and we were unable to recover it.
00:35:58.888 [2024-10-11 22:59:02.018484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.888 [2024-10-11 22:59:02.018602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.889 [2024-10-11 22:59:02.018629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.889 [2024-10-11 22:59:02.018644] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.889 [2024-10-11 22:59:02.018657] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.889 [2024-10-11 22:59:02.018686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.889 qpair failed and we were unable to recover it.
00:35:58.889 [2024-10-11 22:59:02.028505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.889 [2024-10-11 22:59:02.028609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.889 [2024-10-11 22:59:02.028633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.889 [2024-10-11 22:59:02.028648] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.889 [2024-10-11 22:59:02.028661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.889 [2024-10-11 22:59:02.028690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.889 qpair failed and we were unable to recover it.
00:35:58.889 [2024-10-11 22:59:02.038571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.889 [2024-10-11 22:59:02.038653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.889 [2024-10-11 22:59:02.038678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.889 [2024-10-11 22:59:02.038692] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.889 [2024-10-11 22:59:02.038705] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.889 [2024-10-11 22:59:02.038736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.889 qpair failed and we were unable to recover it.
00:35:58.889 [2024-10-11 22:59:02.048586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.889 [2024-10-11 22:59:02.048695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.889 [2024-10-11 22:59:02.048735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.889 [2024-10-11 22:59:02.048751] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.889 [2024-10-11 22:59:02.048763] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.889 [2024-10-11 22:59:02.048792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.889 qpair failed and we were unable to recover it.
00:35:58.889 [2024-10-11 22:59:02.058603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.889 [2024-10-11 22:59:02.058686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.889 [2024-10-11 22:59:02.058710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.889 [2024-10-11 22:59:02.058724] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.889 [2024-10-11 22:59:02.058737] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.889 [2024-10-11 22:59:02.058766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.889 qpair failed and we were unable to recover it.
00:35:58.889 [2024-10-11 22:59:02.068659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.889 [2024-10-11 22:59:02.068764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.889 [2024-10-11 22:59:02.068789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.889 [2024-10-11 22:59:02.068803] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.889 [2024-10-11 22:59:02.068816] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.889 [2024-10-11 22:59:02.068845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.889 qpair failed and we were unable to recover it.
00:35:58.889 [2024-10-11 22:59:02.078639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.889 [2024-10-11 22:59:02.078719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.889 [2024-10-11 22:59:02.078744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.889 [2024-10-11 22:59:02.078759] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.889 [2024-10-11 22:59:02.078771] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.889 [2024-10-11 22:59:02.078800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.889 qpair failed and we were unable to recover it.
00:35:58.889 [2024-10-11 22:59:02.088697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.889 [2024-10-11 22:59:02.088782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.889 [2024-10-11 22:59:02.088807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.889 [2024-10-11 22:59:02.088822] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.889 [2024-10-11 22:59:02.088835] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.889 [2024-10-11 22:59:02.088869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.889 qpair failed and we were unable to recover it.
00:35:58.889 [2024-10-11 22:59:02.098768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.889 [2024-10-11 22:59:02.098876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.889 [2024-10-11 22:59:02.098902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.889 [2024-10-11 22:59:02.098917] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.889 [2024-10-11 22:59:02.098930] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.889 [2024-10-11 22:59:02.098959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.889 qpair failed and we were unable to recover it.
00:35:58.889 [2024-10-11 22:59:02.108758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.889 [2024-10-11 22:59:02.108867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.889 [2024-10-11 22:59:02.108892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.889 [2024-10-11 22:59:02.108907] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.889 [2024-10-11 22:59:02.108919] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.889 [2024-10-11 22:59:02.108948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.889 qpair failed and we were unable to recover it.
00:35:58.889 [2024-10-11 22:59:02.118770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.889 [2024-10-11 22:59:02.118856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.889 [2024-10-11 22:59:02.118881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.889 [2024-10-11 22:59:02.118895] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.889 [2024-10-11 22:59:02.118909] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.889 [2024-10-11 22:59:02.118937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.889 qpair failed and we were unable to recover it.
00:35:58.889 [2024-10-11 22:59:02.128800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.889 [2024-10-11 22:59:02.128902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.889 [2024-10-11 22:59:02.128927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.889 [2024-10-11 22:59:02.128942] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.889 [2024-10-11 22:59:02.128954] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.889 [2024-10-11 22:59:02.128983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.889 qpair failed and we were unable to recover it.
00:35:58.889 [2024-10-11 22:59:02.138820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.889 [2024-10-11 22:59:02.138903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.889 [2024-10-11 22:59:02.138936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.889 [2024-10-11 22:59:02.138952] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.889 [2024-10-11 22:59:02.138965] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.889 [2024-10-11 22:59:02.138994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.889 qpair failed and we were unable to recover it.
00:35:58.889 [2024-10-11 22:59:02.148840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.889 [2024-10-11 22:59:02.148918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.889 [2024-10-11 22:59:02.148943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.889 [2024-10-11 22:59:02.148958] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.889 [2024-10-11 22:59:02.148970] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:58.889 [2024-10-11 22:59:02.148999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.889 qpair failed and we were unable to recover it.
00:35:59.148 [2024-10-11 22:59:02.158894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.148 [2024-10-11 22:59:02.158974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.148 [2024-10-11 22:59:02.158999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.148 [2024-10-11 22:59:02.159014] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.148 [2024-10-11 22:59:02.159026] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.148 [2024-10-11 22:59:02.159055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.148 qpair failed and we were unable to recover it.
00:35:59.148 [2024-10-11 22:59:02.168908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.149 [2024-10-11 22:59:02.168999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.149 [2024-10-11 22:59:02.169023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.149 [2024-10-11 22:59:02.169037] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.149 [2024-10-11 22:59:02.169050] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.149 [2024-10-11 22:59:02.169078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.149 qpair failed and we were unable to recover it.
00:35:59.149 [2024-10-11 22:59:02.179012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.149 [2024-10-11 22:59:02.179097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.149 [2024-10-11 22:59:02.179122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.149 [2024-10-11 22:59:02.179136] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.149 [2024-10-11 22:59:02.179149] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.149 [2024-10-11 22:59:02.179183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.149 qpair failed and we were unable to recover it.
00:35:59.149 [2024-10-11 22:59:02.188984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.149 [2024-10-11 22:59:02.189100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.149 [2024-10-11 22:59:02.189125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.149 [2024-10-11 22:59:02.189140] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.149 [2024-10-11 22:59:02.189153] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.149 [2024-10-11 22:59:02.189182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.149 qpair failed and we were unable to recover it.
00:35:59.149 [2024-10-11 22:59:02.199033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.149 [2024-10-11 22:59:02.199150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.149 [2024-10-11 22:59:02.199175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.149 [2024-10-11 22:59:02.199189] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.149 [2024-10-11 22:59:02.199202] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.149 [2024-10-11 22:59:02.199231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.149 qpair failed and we were unable to recover it.
00:35:59.149 [2024-10-11 22:59:02.209058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.149 [2024-10-11 22:59:02.209151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.149 [2024-10-11 22:59:02.209176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.149 [2024-10-11 22:59:02.209190] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.149 [2024-10-11 22:59:02.209202] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.149 [2024-10-11 22:59:02.209231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.149 qpair failed and we were unable to recover it.
00:35:59.149 [2024-10-11 22:59:02.219071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.149 [2024-10-11 22:59:02.219160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.149 [2024-10-11 22:59:02.219185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.149 [2024-10-11 22:59:02.219199] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.149 [2024-10-11 22:59:02.219212] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.149 [2024-10-11 22:59:02.219241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.149 qpair failed and we were unable to recover it. 
00:35:59.149 [2024-10-11 22:59:02.229099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.149 [2024-10-11 22:59:02.229222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.149 [2024-10-11 22:59:02.229254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.149 [2024-10-11 22:59:02.229269] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.149 [2024-10-11 22:59:02.229282] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.149 [2024-10-11 22:59:02.229311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.149 qpair failed and we were unable to recover it. 
00:35:59.149 [2024-10-11 22:59:02.239118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.149 [2024-10-11 22:59:02.239239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.149 [2024-10-11 22:59:02.239265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.149 [2024-10-11 22:59:02.239280] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.149 [2024-10-11 22:59:02.239292] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.149 [2024-10-11 22:59:02.239321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.149 qpair failed and we were unable to recover it. 
00:35:59.149 [2024-10-11 22:59:02.249144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.149 [2024-10-11 22:59:02.249254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.149 [2024-10-11 22:59:02.249278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.149 [2024-10-11 22:59:02.249292] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.149 [2024-10-11 22:59:02.249305] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.149 [2024-10-11 22:59:02.249333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.149 qpair failed and we were unable to recover it. 
00:35:59.149 [2024-10-11 22:59:02.259151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.149 [2024-10-11 22:59:02.259239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.149 [2024-10-11 22:59:02.259263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.149 [2024-10-11 22:59:02.259283] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.149 [2024-10-11 22:59:02.259295] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.149 [2024-10-11 22:59:02.259323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.149 qpair failed and we were unable to recover it. 
00:35:59.149 [2024-10-11 22:59:02.269253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.149 [2024-10-11 22:59:02.269360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.149 [2024-10-11 22:59:02.269389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.149 [2024-10-11 22:59:02.269406] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.149 [2024-10-11 22:59:02.269419] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.149 [2024-10-11 22:59:02.269454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.149 qpair failed and we were unable to recover it. 
00:35:59.149 [2024-10-11 22:59:02.279238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.149 [2024-10-11 22:59:02.279321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.149 [2024-10-11 22:59:02.279347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.149 [2024-10-11 22:59:02.279361] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.149 [2024-10-11 22:59:02.279374] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.149 [2024-10-11 22:59:02.279403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.149 qpair failed and we were unable to recover it. 
00:35:59.149 [2024-10-11 22:59:02.289251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.149 [2024-10-11 22:59:02.289342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.149 [2024-10-11 22:59:02.289367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.149 [2024-10-11 22:59:02.289381] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.149 [2024-10-11 22:59:02.289393] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.149 [2024-10-11 22:59:02.289422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.149 qpair failed and we were unable to recover it. 
00:35:59.149 [2024-10-11 22:59:02.299290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.149 [2024-10-11 22:59:02.299373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.149 [2024-10-11 22:59:02.299398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.149 [2024-10-11 22:59:02.299413] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.149 [2024-10-11 22:59:02.299425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.149 [2024-10-11 22:59:02.299454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.149 qpair failed and we were unable to recover it. 
00:35:59.149 [2024-10-11 22:59:02.309308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.149 [2024-10-11 22:59:02.309433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.149 [2024-10-11 22:59:02.309458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.149 [2024-10-11 22:59:02.309473] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.150 [2024-10-11 22:59:02.309486] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.150 [2024-10-11 22:59:02.309515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.150 qpair failed and we were unable to recover it. 
00:35:59.150 [2024-10-11 22:59:02.319340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.150 [2024-10-11 22:59:02.319423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.150 [2024-10-11 22:59:02.319453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.150 [2024-10-11 22:59:02.319470] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.150 [2024-10-11 22:59:02.319483] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.150 [2024-10-11 22:59:02.319511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.150 qpair failed and we were unable to recover it. 
00:35:59.150 [2024-10-11 22:59:02.329438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.150 [2024-10-11 22:59:02.329530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.150 [2024-10-11 22:59:02.329562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.150 [2024-10-11 22:59:02.329578] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.150 [2024-10-11 22:59:02.329591] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.150 [2024-10-11 22:59:02.329621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.150 qpair failed and we were unable to recover it. 
00:35:59.150 [2024-10-11 22:59:02.339410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.150 [2024-10-11 22:59:02.339495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.150 [2024-10-11 22:59:02.339520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.150 [2024-10-11 22:59:02.339535] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.150 [2024-10-11 22:59:02.339558] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.150 [2024-10-11 22:59:02.339591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.150 qpair failed and we were unable to recover it. 
00:35:59.150 [2024-10-11 22:59:02.349436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.150 [2024-10-11 22:59:02.349527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.150 [2024-10-11 22:59:02.349558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.150 [2024-10-11 22:59:02.349576] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.150 [2024-10-11 22:59:02.349589] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.150 [2024-10-11 22:59:02.349618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.150 qpair failed and we were unable to recover it. 
00:35:59.150 [2024-10-11 22:59:02.359458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.150 [2024-10-11 22:59:02.359581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.150 [2024-10-11 22:59:02.359607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.150 [2024-10-11 22:59:02.359621] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.150 [2024-10-11 22:59:02.359641] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.150 [2024-10-11 22:59:02.359671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.150 qpair failed and we were unable to recover it. 
00:35:59.150 [2024-10-11 22:59:02.369558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.150 [2024-10-11 22:59:02.369699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.150 [2024-10-11 22:59:02.369723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.150 [2024-10-11 22:59:02.369738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.150 [2024-10-11 22:59:02.369750] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.150 [2024-10-11 22:59:02.369779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.150 qpair failed and we were unable to recover it. 
00:35:59.150 [2024-10-11 22:59:02.379517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.150 [2024-10-11 22:59:02.379610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.150 [2024-10-11 22:59:02.379637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.150 [2024-10-11 22:59:02.379653] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.150 [2024-10-11 22:59:02.379669] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.150 [2024-10-11 22:59:02.379700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.150 qpair failed and we were unable to recover it. 
00:35:59.150 [2024-10-11 22:59:02.389522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.150 [2024-10-11 22:59:02.389623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.150 [2024-10-11 22:59:02.389649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.150 [2024-10-11 22:59:02.389663] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.150 [2024-10-11 22:59:02.389675] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.150 [2024-10-11 22:59:02.389704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.150 qpair failed and we were unable to recover it. 
00:35:59.150 [2024-10-11 22:59:02.399600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.150 [2024-10-11 22:59:02.399721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.150 [2024-10-11 22:59:02.399746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.150 [2024-10-11 22:59:02.399760] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.150 [2024-10-11 22:59:02.399774] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.150 [2024-10-11 22:59:02.399802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.150 qpair failed and we were unable to recover it. 
00:35:59.150 [2024-10-11 22:59:02.409628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.150 [2024-10-11 22:59:02.409727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.150 [2024-10-11 22:59:02.409752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.150 [2024-10-11 22:59:02.409767] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.150 [2024-10-11 22:59:02.409779] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.150 [2024-10-11 22:59:02.409807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.150 qpair failed and we were unable to recover it. 
00:35:59.409 [2024-10-11 22:59:02.419640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.409 [2024-10-11 22:59:02.419729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.409 [2024-10-11 22:59:02.419754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.409 [2024-10-11 22:59:02.419769] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.409 [2024-10-11 22:59:02.419782] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.409 [2024-10-11 22:59:02.419810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.409 qpair failed and we were unable to recover it. 
00:35:59.409 [2024-10-11 22:59:02.429657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.409 [2024-10-11 22:59:02.429739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.409 [2024-10-11 22:59:02.429763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.409 [2024-10-11 22:59:02.429778] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.409 [2024-10-11 22:59:02.429791] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.409 [2024-10-11 22:59:02.429819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.409 qpair failed and we were unable to recover it. 
00:35:59.409 [2024-10-11 22:59:02.439707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.409 [2024-10-11 22:59:02.439794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.409 [2024-10-11 22:59:02.439818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.409 [2024-10-11 22:59:02.439833] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.409 [2024-10-11 22:59:02.439845] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.409 [2024-10-11 22:59:02.439875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.409 qpair failed and we were unable to recover it. 
00:35:59.409 [2024-10-11 22:59:02.449901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.409 [2024-10-11 22:59:02.450000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.409 [2024-10-11 22:59:02.450025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.409 [2024-10-11 22:59:02.450039] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.409 [2024-10-11 22:59:02.450057] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.409 [2024-10-11 22:59:02.450087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.409 qpair failed and we were unable to recover it. 
00:35:59.409 [2024-10-11 22:59:02.459768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.409 [2024-10-11 22:59:02.459901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.409 [2024-10-11 22:59:02.459930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.409 [2024-10-11 22:59:02.459947] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.409 [2024-10-11 22:59:02.459960] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.409 [2024-10-11 22:59:02.459990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.409 qpair failed and we were unable to recover it. 
00:35:59.409 [2024-10-11 22:59:02.469766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.409 [2024-10-11 22:59:02.469853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.409 [2024-10-11 22:59:02.469878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.409 [2024-10-11 22:59:02.469892] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.409 [2024-10-11 22:59:02.469905] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.409 [2024-10-11 22:59:02.469935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.409 qpair failed and we were unable to recover it. 
00:35:59.409 [2024-10-11 22:59:02.479823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.409 [2024-10-11 22:59:02.479911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.409 [2024-10-11 22:59:02.479938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.409 [2024-10-11 22:59:02.479955] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.409 [2024-10-11 22:59:02.479968] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.409 [2024-10-11 22:59:02.479997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.409 qpair failed and we were unable to recover it. 
00:35:59.409 [2024-10-11 22:59:02.489831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.409 [2024-10-11 22:59:02.489921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.409 [2024-10-11 22:59:02.489946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.409 [2024-10-11 22:59:02.489960] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.410 [2024-10-11 22:59:02.489973] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.410 [2024-10-11 22:59:02.490002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.410 qpair failed and we were unable to recover it. 
00:35:59.410 [2024-10-11 22:59:02.499862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.410 [2024-10-11 22:59:02.499959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.410 [2024-10-11 22:59:02.499983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.410 [2024-10-11 22:59:02.499998] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.410 [2024-10-11 22:59:02.500011] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.410 [2024-10-11 22:59:02.500039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.410 qpair failed and we were unable to recover it.
00:35:59.410 [2024-10-11 22:59:02.509896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.410 [2024-10-11 22:59:02.509984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.410 [2024-10-11 22:59:02.510010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.410 [2024-10-11 22:59:02.510025] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.410 [2024-10-11 22:59:02.510038] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.410 [2024-10-11 22:59:02.510066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.410 qpair failed and we were unable to recover it.
00:35:59.410 [2024-10-11 22:59:02.519902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.410 [2024-10-11 22:59:02.519983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.410 [2024-10-11 22:59:02.520008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.410 [2024-10-11 22:59:02.520023] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.410 [2024-10-11 22:59:02.520036] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.410 [2024-10-11 22:59:02.520064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.410 qpair failed and we were unable to recover it.
00:35:59.410 [2024-10-11 22:59:02.530001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.410 [2024-10-11 22:59:02.530092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.410 [2024-10-11 22:59:02.530118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.410 [2024-10-11 22:59:02.530133] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.410 [2024-10-11 22:59:02.530145] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.410 [2024-10-11 22:59:02.530173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.410 qpair failed and we were unable to recover it.
00:35:59.410 [2024-10-11 22:59:02.540014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.410 [2024-10-11 22:59:02.540141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.410 [2024-10-11 22:59:02.540166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.410 [2024-10-11 22:59:02.540180] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.410 [2024-10-11 22:59:02.540199] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.410 [2024-10-11 22:59:02.540229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.410 qpair failed and we were unable to recover it.
00:35:59.410 [2024-10-11 22:59:02.550070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.410 [2024-10-11 22:59:02.550163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.410 [2024-10-11 22:59:02.550188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.410 [2024-10-11 22:59:02.550203] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.410 [2024-10-11 22:59:02.550215] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.410 [2024-10-11 22:59:02.550244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.410 qpair failed and we were unable to recover it.
00:35:59.410 [2024-10-11 22:59:02.560076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.410 [2024-10-11 22:59:02.560200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.410 [2024-10-11 22:59:02.560225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.410 [2024-10-11 22:59:02.560239] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.410 [2024-10-11 22:59:02.560252] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.410 [2024-10-11 22:59:02.560280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.410 qpair failed and we were unable to recover it.
00:35:59.410 [2024-10-11 22:59:02.570163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.410 [2024-10-11 22:59:02.570270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.410 [2024-10-11 22:59:02.570295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.410 [2024-10-11 22:59:02.570309] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.410 [2024-10-11 22:59:02.570322] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.410 [2024-10-11 22:59:02.570352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.410 qpair failed and we were unable to recover it.
00:35:59.410 [2024-10-11 22:59:02.580093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.410 [2024-10-11 22:59:02.580181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.410 [2024-10-11 22:59:02.580205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.410 [2024-10-11 22:59:02.580220] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.410 [2024-10-11 22:59:02.580233] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.410 [2024-10-11 22:59:02.580261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.410 qpair failed and we were unable to recover it.
00:35:59.410 [2024-10-11 22:59:02.590172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.410 [2024-10-11 22:59:02.590260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.410 [2024-10-11 22:59:02.590285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.410 [2024-10-11 22:59:02.590300] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.410 [2024-10-11 22:59:02.590313] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.410 [2024-10-11 22:59:02.590342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.410 qpair failed and we were unable to recover it.
00:35:59.410 [2024-10-11 22:59:02.600194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.410 [2024-10-11 22:59:02.600279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.410 [2024-10-11 22:59:02.600304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.410 [2024-10-11 22:59:02.600319] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.410 [2024-10-11 22:59:02.600331] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.410 [2024-10-11 22:59:02.600359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.410 qpair failed and we were unable to recover it.
00:35:59.410 [2024-10-11 22:59:02.610205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.410 [2024-10-11 22:59:02.610293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.410 [2024-10-11 22:59:02.610318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.410 [2024-10-11 22:59:02.610332] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.410 [2024-10-11 22:59:02.610344] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.410 [2024-10-11 22:59:02.610372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.410 qpair failed and we were unable to recover it.
00:35:59.410 [2024-10-11 22:59:02.620242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.410 [2024-10-11 22:59:02.620358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.410 [2024-10-11 22:59:02.620383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.410 [2024-10-11 22:59:02.620396] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.410 [2024-10-11 22:59:02.620409] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.410 [2024-10-11 22:59:02.620438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.410 qpair failed and we were unable to recover it.
00:35:59.410 [2024-10-11 22:59:02.630275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.410 [2024-10-11 22:59:02.630374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.410 [2024-10-11 22:59:02.630400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.410 [2024-10-11 22:59:02.630414] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.410 [2024-10-11 22:59:02.630432] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.410 [2024-10-11 22:59:02.630462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.410 qpair failed and we were unable to recover it.
00:35:59.410 [2024-10-11 22:59:02.640358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.411 [2024-10-11 22:59:02.640451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.411 [2024-10-11 22:59:02.640477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.411 [2024-10-11 22:59:02.640491] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.411 [2024-10-11 22:59:02.640504] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.411 [2024-10-11 22:59:02.640535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.411 qpair failed and we were unable to recover it.
00:35:59.411 [2024-10-11 22:59:02.650340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.411 [2024-10-11 22:59:02.650437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.411 [2024-10-11 22:59:02.650462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.411 [2024-10-11 22:59:02.650476] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.411 [2024-10-11 22:59:02.650489] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.411 [2024-10-11 22:59:02.650518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.411 qpair failed and we were unable to recover it.
00:35:59.411 [2024-10-11 22:59:02.660367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.411 [2024-10-11 22:59:02.660453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.411 [2024-10-11 22:59:02.660478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.411 [2024-10-11 22:59:02.660493] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.411 [2024-10-11 22:59:02.660505] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.411 [2024-10-11 22:59:02.660536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.411 qpair failed and we were unable to recover it.
00:35:59.411 [2024-10-11 22:59:02.670383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.411 [2024-10-11 22:59:02.670493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.411 [2024-10-11 22:59:02.670518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.411 [2024-10-11 22:59:02.670533] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.411 [2024-10-11 22:59:02.670546] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.411 [2024-10-11 22:59:02.670584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.411 qpair failed and we were unable to recover it.
00:35:59.678 [2024-10-11 22:59:02.680423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.678 [2024-10-11 22:59:02.680562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.678 [2024-10-11 22:59:02.680587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.678 [2024-10-11 22:59:02.680602] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.678 [2024-10-11 22:59:02.680614] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.678 [2024-10-11 22:59:02.680643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.678 qpair failed and we were unable to recover it.
00:35:59.678 [2024-10-11 22:59:02.690431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.678 [2024-10-11 22:59:02.690524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.678 [2024-10-11 22:59:02.690555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.678 [2024-10-11 22:59:02.690572] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.678 [2024-10-11 22:59:02.690585] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.678 [2024-10-11 22:59:02.690614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.678 qpair failed and we were unable to recover it.
00:35:59.678 [2024-10-11 22:59:02.700457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.678 [2024-10-11 22:59:02.700538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.678 [2024-10-11 22:59:02.700572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.678 [2024-10-11 22:59:02.700588] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.678 [2024-10-11 22:59:02.700600] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.678 [2024-10-11 22:59:02.700629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.678 qpair failed and we were unable to recover it.
00:35:59.678 [2024-10-11 22:59:02.710471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.678 [2024-10-11 22:59:02.710573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.678 [2024-10-11 22:59:02.710599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.679 [2024-10-11 22:59:02.710613] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.679 [2024-10-11 22:59:02.710625] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.679 [2024-10-11 22:59:02.710654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.679 qpair failed and we were unable to recover it.
00:35:59.679 [2024-10-11 22:59:02.720511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.679 [2024-10-11 22:59:02.720605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.679 [2024-10-11 22:59:02.720631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.679 [2024-10-11 22:59:02.720650] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.679 [2024-10-11 22:59:02.720663] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.679 [2024-10-11 22:59:02.720693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.679 qpair failed and we were unable to recover it.
00:35:59.679 [2024-10-11 22:59:02.730575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.679 [2024-10-11 22:59:02.730675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.679 [2024-10-11 22:59:02.730699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.679 [2024-10-11 22:59:02.730713] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.679 [2024-10-11 22:59:02.730725] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.679 [2024-10-11 22:59:02.730754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.679 qpair failed and we were unable to recover it.
00:35:59.679 [2024-10-11 22:59:02.740567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.679 [2024-10-11 22:59:02.740656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.679 [2024-10-11 22:59:02.740681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.679 [2024-10-11 22:59:02.740695] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.679 [2024-10-11 22:59:02.740708] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.679 [2024-10-11 22:59:02.740736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.679 qpair failed and we were unable to recover it.
00:35:59.679 [2024-10-11 22:59:02.750609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.679 [2024-10-11 22:59:02.750694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.679 [2024-10-11 22:59:02.750719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.679 [2024-10-11 22:59:02.750734] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.679 [2024-10-11 22:59:02.750747] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.679 [2024-10-11 22:59:02.750775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.679 qpair failed and we were unable to recover it.
00:35:59.679 [2024-10-11 22:59:02.760649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.679 [2024-10-11 22:59:02.760741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.679 [2024-10-11 22:59:02.760770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.679 [2024-10-11 22:59:02.760787] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.679 [2024-10-11 22:59:02.760800] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.679 [2024-10-11 22:59:02.760830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.679 qpair failed and we were unable to recover it.
00:35:59.679 [2024-10-11 22:59:02.770723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.679 [2024-10-11 22:59:02.770828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.679 [2024-10-11 22:59:02.770853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.679 [2024-10-11 22:59:02.770867] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.679 [2024-10-11 22:59:02.770879] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.679 [2024-10-11 22:59:02.770909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.679 qpair failed and we were unable to recover it.
00:35:59.679 [2024-10-11 22:59:02.780707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.679 [2024-10-11 22:59:02.780842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.679 [2024-10-11 22:59:02.780868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.679 [2024-10-11 22:59:02.780883] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.679 [2024-10-11 22:59:02.780895] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.679 [2024-10-11 22:59:02.780923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.679 qpair failed and we were unable to recover it.
00:35:59.679 [2024-10-11 22:59:02.790830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.679 [2024-10-11 22:59:02.790961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.679 [2024-10-11 22:59:02.790986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.679 [2024-10-11 22:59:02.791001] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.679 [2024-10-11 22:59:02.791014] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.679 [2024-10-11 22:59:02.791042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.679 qpair failed and we were unable to recover it.
00:35:59.679 [2024-10-11 22:59:02.800830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.679 [2024-10-11 22:59:02.800912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.679 [2024-10-11 22:59:02.800937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.679 [2024-10-11 22:59:02.800951] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.679 [2024-10-11 22:59:02.800964] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.679 [2024-10-11 22:59:02.800992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.679 qpair failed and we were unable to recover it.
00:35:59.679 [2024-10-11 22:59:02.810822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.679 [2024-10-11 22:59:02.810942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.679 [2024-10-11 22:59:02.810967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.679 [2024-10-11 22:59:02.810986] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.679 [2024-10-11 22:59:02.811000] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.679 [2024-10-11 22:59:02.811028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.679 qpair failed and we were unable to recover it.
00:35:59.679 [2024-10-11 22:59:02.820892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.679 [2024-10-11 22:59:02.820987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.679 [2024-10-11 22:59:02.821012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.679 [2024-10-11 22:59:02.821027] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.679 [2024-10-11 22:59:02.821040] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.679 [2024-10-11 22:59:02.821069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.679 qpair failed and we were unable to recover it.
00:35:59.679 [2024-10-11 22:59:02.830828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.679 [2024-10-11 22:59:02.830925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.679 [2024-10-11 22:59:02.830949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.679 [2024-10-11 22:59:02.830964] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.679 [2024-10-11 22:59:02.830977] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.679 [2024-10-11 22:59:02.831005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.679 qpair failed and we were unable to recover it.
00:35:59.679 [2024-10-11 22:59:02.840890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.679 [2024-10-11 22:59:02.841019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.679 [2024-10-11 22:59:02.841044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.679 [2024-10-11 22:59:02.841058] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.679 [2024-10-11 22:59:02.841071] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.679 [2024-10-11 22:59:02.841099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.679 qpair failed and we were unable to recover it.
00:35:59.679 [2024-10-11 22:59:02.850907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.679 [2024-10-11 22:59:02.851001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.679 [2024-10-11 22:59:02.851026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.679 [2024-10-11 22:59:02.851040] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.679 [2024-10-11 22:59:02.851053] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:35:59.679 [2024-10-11 22:59:02.851081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.679 qpair failed and we were unable to recover it.
00:35:59.680 [2024-10-11 22:59:02.860952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.680 [2024-10-11 22:59:02.861041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.680 [2024-10-11 22:59:02.861074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.680 [2024-10-11 22:59:02.861089] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.680 [2024-10-11 22:59:02.861101] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.680 [2024-10-11 22:59:02.861130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.680 qpair failed and we were unable to recover it. 
00:35:59.680 [2024-10-11 22:59:02.870933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.680 [2024-10-11 22:59:02.871063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.680 [2024-10-11 22:59:02.871089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.680 [2024-10-11 22:59:02.871104] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.680 [2024-10-11 22:59:02.871117] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.680 [2024-10-11 22:59:02.871146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.680 qpair failed and we were unable to recover it. 
00:35:59.680 [2024-10-11 22:59:02.880965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.680 [2024-10-11 22:59:02.881088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.680 [2024-10-11 22:59:02.881113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.680 [2024-10-11 22:59:02.881127] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.680 [2024-10-11 22:59:02.881139] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.680 [2024-10-11 22:59:02.881169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.680 qpair failed and we were unable to recover it. 
00:35:59.680 [2024-10-11 22:59:02.891005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.680 [2024-10-11 22:59:02.891100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.680 [2024-10-11 22:59:02.891125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.680 [2024-10-11 22:59:02.891138] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.680 [2024-10-11 22:59:02.891151] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.680 [2024-10-11 22:59:02.891180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.680 qpair failed and we were unable to recover it. 
00:35:59.680 [2024-10-11 22:59:02.901017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.680 [2024-10-11 22:59:02.901129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.680 [2024-10-11 22:59:02.901154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.680 [2024-10-11 22:59:02.901174] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.680 [2024-10-11 22:59:02.901189] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.680 [2024-10-11 22:59:02.901224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.680 qpair failed and we were unable to recover it. 
00:35:59.680 [2024-10-11 22:59:02.911100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.680 [2024-10-11 22:59:02.911184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.680 [2024-10-11 22:59:02.911210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.680 [2024-10-11 22:59:02.911224] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.680 [2024-10-11 22:59:02.911236] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.680 [2024-10-11 22:59:02.911265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.680 qpair failed and we were unable to recover it. 
00:35:59.680 [2024-10-11 22:59:02.921067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.680 [2024-10-11 22:59:02.921152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.680 [2024-10-11 22:59:02.921177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.680 [2024-10-11 22:59:02.921191] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.680 [2024-10-11 22:59:02.921203] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.680 [2024-10-11 22:59:02.921232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.680 qpair failed and we were unable to recover it. 
00:35:59.680 [2024-10-11 22:59:02.931118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.680 [2024-10-11 22:59:02.931233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.680 [2024-10-11 22:59:02.931258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.680 [2024-10-11 22:59:02.931272] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.680 [2024-10-11 22:59:02.931285] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:35:59.680 [2024-10-11 22:59:02.931313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.680 qpair failed and we were unable to recover it. 
00:36:00.001 [2024-10-11 22:59:02.941136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.001 [2024-10-11 22:59:02.941228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.001 [2024-10-11 22:59:02.941252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.001 [2024-10-11 22:59:02.941266] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.001 [2024-10-11 22:59:02.941279] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.001 [2024-10-11 22:59:02.941307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.001 qpair failed and we were unable to recover it. 
00:36:00.001 [2024-10-11 22:59:02.951233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.001 [2024-10-11 22:59:02.951342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.001 [2024-10-11 22:59:02.951366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.001 [2024-10-11 22:59:02.951380] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.001 [2024-10-11 22:59:02.951392] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.001 [2024-10-11 22:59:02.951421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.001 qpair failed and we were unable to recover it. 
00:36:00.001 [2024-10-11 22:59:02.961311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.001 [2024-10-11 22:59:02.961410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.001 [2024-10-11 22:59:02.961435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.001 [2024-10-11 22:59:02.961449] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.001 [2024-10-11 22:59:02.961461] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.001 [2024-10-11 22:59:02.961490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.001 qpair failed and we were unable to recover it. 
00:36:00.001 [2024-10-11 22:59:02.971226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.001 [2024-10-11 22:59:02.971312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.001 [2024-10-11 22:59:02.971336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.001 [2024-10-11 22:59:02.971350] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.001 [2024-10-11 22:59:02.971362] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.001 [2024-10-11 22:59:02.971390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.001 qpair failed and we were unable to recover it. 
00:36:00.001 [2024-10-11 22:59:02.981228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.001 [2024-10-11 22:59:02.981311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.001 [2024-10-11 22:59:02.981335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.001 [2024-10-11 22:59:02.981349] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.001 [2024-10-11 22:59:02.981362] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.001 [2024-10-11 22:59:02.981390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.001 qpair failed and we were unable to recover it. 
00:36:00.001 [2024-10-11 22:59:02.991287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.001 [2024-10-11 22:59:02.991388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.001 [2024-10-11 22:59:02.991412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.001 [2024-10-11 22:59:02.991432] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.001 [2024-10-11 22:59:02.991445] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.001 [2024-10-11 22:59:02.991477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.001 qpair failed and we were unable to recover it. 
00:36:00.001 [2024-10-11 22:59:03.001282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.001 [2024-10-11 22:59:03.001374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.001 [2024-10-11 22:59:03.001399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.001 [2024-10-11 22:59:03.001413] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.001 [2024-10-11 22:59:03.001426] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.001 [2024-10-11 22:59:03.001455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.001 qpair failed and we were unable to recover it. 
00:36:00.001 [2024-10-11 22:59:03.011336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.001 [2024-10-11 22:59:03.011423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.001 [2024-10-11 22:59:03.011447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.001 [2024-10-11 22:59:03.011462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.001 [2024-10-11 22:59:03.011474] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.002 [2024-10-11 22:59:03.011502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.002 qpair failed and we were unable to recover it. 
00:36:00.002 [2024-10-11 22:59:03.021341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.002 [2024-10-11 22:59:03.021457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.002 [2024-10-11 22:59:03.021483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.002 [2024-10-11 22:59:03.021498] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.002 [2024-10-11 22:59:03.021510] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.002 [2024-10-11 22:59:03.021539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.002 qpair failed and we were unable to recover it. 
00:36:00.002 [2024-10-11 22:59:03.031369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.002 [2024-10-11 22:59:03.031477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.002 [2024-10-11 22:59:03.031503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.002 [2024-10-11 22:59:03.031518] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.002 [2024-10-11 22:59:03.031530] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.002 [2024-10-11 22:59:03.031567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.002 qpair failed and we were unable to recover it. 
00:36:00.002 [2024-10-11 22:59:03.041461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.002 [2024-10-11 22:59:03.041558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.002 [2024-10-11 22:59:03.041583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.002 [2024-10-11 22:59:03.041597] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.002 [2024-10-11 22:59:03.041609] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.002 [2024-10-11 22:59:03.041638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.002 qpair failed and we were unable to recover it. 
00:36:00.002 [2024-10-11 22:59:03.051454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.002 [2024-10-11 22:59:03.051545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.002 [2024-10-11 22:59:03.051577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.002 [2024-10-11 22:59:03.051592] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.002 [2024-10-11 22:59:03.051604] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.002 [2024-10-11 22:59:03.051633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.002 qpair failed and we were unable to recover it. 
00:36:00.002 [2024-10-11 22:59:03.061504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.002 [2024-10-11 22:59:03.061603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.002 [2024-10-11 22:59:03.061629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.002 [2024-10-11 22:59:03.061643] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.002 [2024-10-11 22:59:03.061655] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.002 [2024-10-11 22:59:03.061683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.002 qpair failed and we were unable to recover it. 
00:36:00.002 [2024-10-11 22:59:03.071501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.002 [2024-10-11 22:59:03.071596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.002 [2024-10-11 22:59:03.071621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.002 [2024-10-11 22:59:03.071636] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.002 [2024-10-11 22:59:03.071649] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.002 [2024-10-11 22:59:03.071677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.002 qpair failed and we were unable to recover it. 
00:36:00.002 [2024-10-11 22:59:03.081545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.002 [2024-10-11 22:59:03.081638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.002 [2024-10-11 22:59:03.081669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.002 [2024-10-11 22:59:03.081684] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.002 [2024-10-11 22:59:03.081697] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.002 [2024-10-11 22:59:03.081725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.002 qpair failed and we were unable to recover it. 
00:36:00.002 [2024-10-11 22:59:03.091585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.002 [2024-10-11 22:59:03.091674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.002 [2024-10-11 22:59:03.091699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.002 [2024-10-11 22:59:03.091713] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.002 [2024-10-11 22:59:03.091726] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.002 [2024-10-11 22:59:03.091754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.002 qpair failed and we were unable to recover it. 
00:36:00.002 [2024-10-11 22:59:03.101578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.002 [2024-10-11 22:59:03.101656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.002 [2024-10-11 22:59:03.101681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.002 [2024-10-11 22:59:03.101695] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.002 [2024-10-11 22:59:03.101709] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.002 [2024-10-11 22:59:03.101738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.002 qpair failed and we were unable to recover it. 
00:36:00.002 [2024-10-11 22:59:03.111610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.002 [2024-10-11 22:59:03.111696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.002 [2024-10-11 22:59:03.111721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.002 [2024-10-11 22:59:03.111734] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.002 [2024-10-11 22:59:03.111747] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.002 [2024-10-11 22:59:03.111775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.002 qpair failed and we were unable to recover it. 
00:36:00.002 [2024-10-11 22:59:03.121640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.002 [2024-10-11 22:59:03.121723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.002 [2024-10-11 22:59:03.121747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.002 [2024-10-11 22:59:03.121761] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.002 [2024-10-11 22:59:03.121773] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.002 [2024-10-11 22:59:03.121802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.002 qpair failed and we were unable to recover it. 
00:36:00.002 [2024-10-11 22:59:03.131674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.002 [2024-10-11 22:59:03.131767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.002 [2024-10-11 22:59:03.131791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.002 [2024-10-11 22:59:03.131805] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.002 [2024-10-11 22:59:03.131818] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.002 [2024-10-11 22:59:03.131846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.002 qpair failed and we were unable to recover it. 
00:36:00.002 [2024-10-11 22:59:03.141723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.002 [2024-10-11 22:59:03.141851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.002 [2024-10-11 22:59:03.141878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.002 [2024-10-11 22:59:03.141892] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.002 [2024-10-11 22:59:03.141905] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.002 [2024-10-11 22:59:03.141933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.002 qpair failed and we were unable to recover it.
00:36:00.002 [2024-10-11 22:59:03.151726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.002 [2024-10-11 22:59:03.151850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.002 [2024-10-11 22:59:03.151876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.002 [2024-10-11 22:59:03.151892] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.002 [2024-10-11 22:59:03.151904] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.002 [2024-10-11 22:59:03.151933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.002 qpair failed and we were unable to recover it.
00:36:00.002 [2024-10-11 22:59:03.161763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.002 [2024-10-11 22:59:03.161856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.002 [2024-10-11 22:59:03.161891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.002 [2024-10-11 22:59:03.161906] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.002 [2024-10-11 22:59:03.161918] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.002 [2024-10-11 22:59:03.161947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.002 qpair failed and we were unable to recover it.
00:36:00.002 [2024-10-11 22:59:03.171780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.002 [2024-10-11 22:59:03.171875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.002 [2024-10-11 22:59:03.171904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.002 [2024-10-11 22:59:03.171919] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.002 [2024-10-11 22:59:03.171932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.002 [2024-10-11 22:59:03.171960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.002 qpair failed and we were unable to recover it.
00:36:00.002 [2024-10-11 22:59:03.181871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.002 [2024-10-11 22:59:03.181990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.002 [2024-10-11 22:59:03.182016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.002 [2024-10-11 22:59:03.182031] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.002 [2024-10-11 22:59:03.182043] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.002 [2024-10-11 22:59:03.182071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.002 qpair failed and we were unable to recover it.
00:36:00.002 [2024-10-11 22:59:03.191852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.002 [2024-10-11 22:59:03.191979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.002 [2024-10-11 22:59:03.192006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.002 [2024-10-11 22:59:03.192020] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.002 [2024-10-11 22:59:03.192032] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.002 [2024-10-11 22:59:03.192061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.002 qpair failed and we were unable to recover it.
00:36:00.002 [2024-10-11 22:59:03.201892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.002 [2024-10-11 22:59:03.202011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.002 [2024-10-11 22:59:03.202037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.002 [2024-10-11 22:59:03.202052] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.002 [2024-10-11 22:59:03.202064] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.002 [2024-10-11 22:59:03.202092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.002 qpair failed and we were unable to recover it.
00:36:00.002 [2024-10-11 22:59:03.211898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.002 [2024-10-11 22:59:03.211993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.002 [2024-10-11 22:59:03.212017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.002 [2024-10-11 22:59:03.212031] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.002 [2024-10-11 22:59:03.212043] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.002 [2024-10-11 22:59:03.212077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.002 qpair failed and we were unable to recover it.
00:36:00.002 [2024-10-11 22:59:03.221906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.002 [2024-10-11 22:59:03.222007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.002 [2024-10-11 22:59:03.222033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.002 [2024-10-11 22:59:03.222047] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.002 [2024-10-11 22:59:03.222059] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.002 [2024-10-11 22:59:03.222087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.002 qpair failed and we were unable to recover it.
00:36:00.002 [2024-10-11 22:59:03.231958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.002 [2024-10-11 22:59:03.232051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.002 [2024-10-11 22:59:03.232076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.002 [2024-10-11 22:59:03.232090] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.002 [2024-10-11 22:59:03.232103] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.002 [2024-10-11 22:59:03.232132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.002 qpair failed and we were unable to recover it.
00:36:00.002 [2024-10-11 22:59:03.241962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.002 [2024-10-11 22:59:03.242057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.002 [2024-10-11 22:59:03.242082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.002 [2024-10-11 22:59:03.242096] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.002 [2024-10-11 22:59:03.242109] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.002 [2024-10-11 22:59:03.242138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.002 qpair failed and we were unable to recover it.
00:36:00.285 [2024-10-11 22:59:03.252022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.285 [2024-10-11 22:59:03.252120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.285 [2024-10-11 22:59:03.252154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.285 [2024-10-11 22:59:03.252168] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.285 [2024-10-11 22:59:03.252180] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.285 [2024-10-11 22:59:03.252218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.285 qpair failed and we were unable to recover it.
00:36:00.285 [2024-10-11 22:59:03.262041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.285 [2024-10-11 22:59:03.262131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.285 [2024-10-11 22:59:03.262162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.285 [2024-10-11 22:59:03.262176] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.285 [2024-10-11 22:59:03.262188] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.285 [2024-10-11 22:59:03.262217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.285 qpair failed and we were unable to recover it.
00:36:00.285 [2024-10-11 22:59:03.272102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.285 [2024-10-11 22:59:03.272225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.285 [2024-10-11 22:59:03.272251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.285 [2024-10-11 22:59:03.272266] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.285 [2024-10-11 22:59:03.272278] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.285 [2024-10-11 22:59:03.272306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.285 qpair failed and we were unable to recover it.
00:36:00.285 [2024-10-11 22:59:03.282138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.285 [2024-10-11 22:59:03.282253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.285 [2024-10-11 22:59:03.282279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.285 [2024-10-11 22:59:03.282293] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.285 [2024-10-11 22:59:03.282305] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.285 [2024-10-11 22:59:03.282333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.285 qpair failed and we were unable to recover it.
00:36:00.285 [2024-10-11 22:59:03.292124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.285 [2024-10-11 22:59:03.292220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.285 [2024-10-11 22:59:03.292246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.285 [2024-10-11 22:59:03.292260] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.285 [2024-10-11 22:59:03.292272] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.285 [2024-10-11 22:59:03.292300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.285 qpair failed and we were unable to recover it.
00:36:00.285 [2024-10-11 22:59:03.302145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.285 [2024-10-11 22:59:03.302237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.285 [2024-10-11 22:59:03.302272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.285 [2024-10-11 22:59:03.302285] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.285 [2024-10-11 22:59:03.302298] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.285 [2024-10-11 22:59:03.302332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.285 qpair failed and we were unable to recover it.
00:36:00.285 [2024-10-11 22:59:03.312185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.285 [2024-10-11 22:59:03.312273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.285 [2024-10-11 22:59:03.312298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.285 [2024-10-11 22:59:03.312313] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.285 [2024-10-11 22:59:03.312326] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.285 [2024-10-11 22:59:03.312354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.285 qpair failed and we were unable to recover it.
00:36:00.285 [2024-10-11 22:59:03.322193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.285 [2024-10-11 22:59:03.322282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.285 [2024-10-11 22:59:03.322307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.285 [2024-10-11 22:59:03.322320] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.285 [2024-10-11 22:59:03.322334] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.285 [2024-10-11 22:59:03.322363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.285 qpair failed and we were unable to recover it.
00:36:00.285 [2024-10-11 22:59:03.332249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.285 [2024-10-11 22:59:03.332369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.285 [2024-10-11 22:59:03.332395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.286 [2024-10-11 22:59:03.332410] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.286 [2024-10-11 22:59:03.332422] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.286 [2024-10-11 22:59:03.332450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.286 qpair failed and we were unable to recover it.
00:36:00.286 [2024-10-11 22:59:03.342297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.286 [2024-10-11 22:59:03.342389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.286 [2024-10-11 22:59:03.342415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.286 [2024-10-11 22:59:03.342429] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.286 [2024-10-11 22:59:03.342441] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.286 [2024-10-11 22:59:03.342469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.286 qpair failed and we were unable to recover it.
00:36:00.286 [2024-10-11 22:59:03.352310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.286 [2024-10-11 22:59:03.352405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.286 [2024-10-11 22:59:03.352437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.286 [2024-10-11 22:59:03.352452] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.286 [2024-10-11 22:59:03.352464] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.286 [2024-10-11 22:59:03.352494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.286 qpair failed and we were unable to recover it.
00:36:00.286 [2024-10-11 22:59:03.362334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.286 [2024-10-11 22:59:03.362425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.286 [2024-10-11 22:59:03.362449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.286 [2024-10-11 22:59:03.362464] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.286 [2024-10-11 22:59:03.362476] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.286 [2024-10-11 22:59:03.362504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.286 qpair failed and we were unable to recover it.
00:36:00.286 [2024-10-11 22:59:03.372441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.286 [2024-10-11 22:59:03.372576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.286 [2024-10-11 22:59:03.372603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.286 [2024-10-11 22:59:03.372617] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.286 [2024-10-11 22:59:03.372629] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.286 [2024-10-11 22:59:03.372658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.286 qpair failed and we were unable to recover it.
00:36:00.286 [2024-10-11 22:59:03.382368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.286 [2024-10-11 22:59:03.382460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.286 [2024-10-11 22:59:03.382484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.286 [2024-10-11 22:59:03.382498] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.286 [2024-10-11 22:59:03.382510] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.286 [2024-10-11 22:59:03.382538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.286 qpair failed and we were unable to recover it.
00:36:00.286 [2024-10-11 22:59:03.392410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.286 [2024-10-11 22:59:03.392498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.286 [2024-10-11 22:59:03.392525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.286 [2024-10-11 22:59:03.392543] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.286 [2024-10-11 22:59:03.392568] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.286 [2024-10-11 22:59:03.392605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.286 qpair failed and we were unable to recover it.
00:36:00.286 [2024-10-11 22:59:03.402470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.286 [2024-10-11 22:59:03.402567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.286 [2024-10-11 22:59:03.402593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.286 [2024-10-11 22:59:03.402607] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.286 [2024-10-11 22:59:03.402620] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.286 [2024-10-11 22:59:03.402648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.286 qpair failed and we were unable to recover it.
00:36:00.286 [2024-10-11 22:59:03.412475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.286 [2024-10-11 22:59:03.412572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.286 [2024-10-11 22:59:03.412597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.286 [2024-10-11 22:59:03.412612] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.286 [2024-10-11 22:59:03.412624] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.286 [2024-10-11 22:59:03.412653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.286 qpair failed and we were unable to recover it.
00:36:00.286 [2024-10-11 22:59:03.422493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.286 [2024-10-11 22:59:03.422588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.286 [2024-10-11 22:59:03.422613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.286 [2024-10-11 22:59:03.422627] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.286 [2024-10-11 22:59:03.422639] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.286 [2024-10-11 22:59:03.422667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.286 qpair failed and we were unable to recover it.
00:36:00.286 [2024-10-11 22:59:03.432535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.286 [2024-10-11 22:59:03.432645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.286 [2024-10-11 22:59:03.432686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.286 [2024-10-11 22:59:03.432703] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.286 [2024-10-11 22:59:03.432716] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.286 [2024-10-11 22:59:03.432746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.286 qpair failed and we were unable to recover it.
00:36:00.286 [2024-10-11 22:59:03.442557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.286 [2024-10-11 22:59:03.442656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.286 [2024-10-11 22:59:03.442691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.286 [2024-10-11 22:59:03.442708] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.286 [2024-10-11 22:59:03.442720] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.286 [2024-10-11 22:59:03.442750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.286 qpair failed and we were unable to recover it.
00:36:00.286 [2024-10-11 22:59:03.452579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.286 [2024-10-11 22:59:03.452672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.286 [2024-10-11 22:59:03.452696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.286 [2024-10-11 22:59:03.452710] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.286 [2024-10-11 22:59:03.452723] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.286 [2024-10-11 22:59:03.452752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.286 qpair failed and we were unable to recover it.
00:36:00.286 [2024-10-11 22:59:03.462593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.286 [2024-10-11 22:59:03.462682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.286 [2024-10-11 22:59:03.462707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.286 [2024-10-11 22:59:03.462721] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.286 [2024-10-11 22:59:03.462734] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.286 [2024-10-11 22:59:03.462763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.286 qpair failed and we were unable to recover it.
00:36:00.286 [2024-10-11 22:59:03.472627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.286 [2024-10-11 22:59:03.472717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.286 [2024-10-11 22:59:03.472743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.286 [2024-10-11 22:59:03.472758] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.286 [2024-10-11 22:59:03.472770] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.286 [2024-10-11 22:59:03.472799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.286 qpair failed and we were unable to recover it.
00:36:00.287 [2024-10-11 22:59:03.482698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.287 [2024-10-11 22:59:03.482781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.287 [2024-10-11 22:59:03.482807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.287 [2024-10-11 22:59:03.482822] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.287 [2024-10-11 22:59:03.482834] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.287 [2024-10-11 22:59:03.482868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.287 qpair failed and we were unable to recover it.
00:36:00.287 [2024-10-11 22:59:03.492694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.287 [2024-10-11 22:59:03.492781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.287 [2024-10-11 22:59:03.492805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.287 [2024-10-11 22:59:03.492820] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.287 [2024-10-11 22:59:03.492832] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:00.287 [2024-10-11 22:59:03.492861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:00.287 qpair failed and we were unable to recover it.
00:36:00.287 [2024-10-11 22:59:03.502755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.287 [2024-10-11 22:59:03.502872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.287 [2024-10-11 22:59:03.502898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.287 [2024-10-11 22:59:03.502913] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.287 [2024-10-11 22:59:03.502926] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.287 [2024-10-11 22:59:03.502956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.287 qpair failed and we were unable to recover it. 
00:36:00.287 [2024-10-11 22:59:03.512766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.287 [2024-10-11 22:59:03.512853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.287 [2024-10-11 22:59:03.512878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.287 [2024-10-11 22:59:03.512892] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.287 [2024-10-11 22:59:03.512904] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.287 [2024-10-11 22:59:03.512933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.287 qpair failed and we were unable to recover it. 
00:36:00.287 [2024-10-11 22:59:03.522816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.287 [2024-10-11 22:59:03.522938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.287 [2024-10-11 22:59:03.522968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.287 [2024-10-11 22:59:03.522985] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.287 [2024-10-11 22:59:03.522998] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.287 [2024-10-11 22:59:03.523028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.287 qpair failed and we were unable to recover it. 
00:36:00.287 [2024-10-11 22:59:03.532800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.287 [2024-10-11 22:59:03.532896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.287 [2024-10-11 22:59:03.532928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.287 [2024-10-11 22:59:03.532944] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.287 [2024-10-11 22:59:03.532957] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.287 [2024-10-11 22:59:03.532986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.287 qpair failed and we were unable to recover it. 
00:36:00.287 [2024-10-11 22:59:03.542832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.287 [2024-10-11 22:59:03.542921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.287 [2024-10-11 22:59:03.542947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.287 [2024-10-11 22:59:03.542961] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.287 [2024-10-11 22:59:03.542973] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.287 [2024-10-11 22:59:03.543001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.287 qpair failed and we were unable to recover it. 
00:36:00.546 [2024-10-11 22:59:03.552907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.546 [2024-10-11 22:59:03.553031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.546 [2024-10-11 22:59:03.553057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.546 [2024-10-11 22:59:03.553073] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.546 [2024-10-11 22:59:03.553086] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.546 [2024-10-11 22:59:03.553114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.546 qpair failed and we were unable to recover it. 
00:36:00.546 [2024-10-11 22:59:03.562878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.546 [2024-10-11 22:59:03.562970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.546 [2024-10-11 22:59:03.562994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.546 [2024-10-11 22:59:03.563009] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.546 [2024-10-11 22:59:03.563021] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.546 [2024-10-11 22:59:03.563050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.546 qpair failed and we were unable to recover it. 
00:36:00.546 [2024-10-11 22:59:03.572935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.546 [2024-10-11 22:59:03.573021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.546 [2024-10-11 22:59:03.573046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.546 [2024-10-11 22:59:03.573060] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.546 [2024-10-11 22:59:03.573078] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.546 [2024-10-11 22:59:03.573119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.546 qpair failed and we were unable to recover it. 
00:36:00.546 [2024-10-11 22:59:03.582987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.546 [2024-10-11 22:59:03.583074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.546 [2024-10-11 22:59:03.583098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.546 [2024-10-11 22:59:03.583112] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.546 [2024-10-11 22:59:03.583125] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.546 [2024-10-11 22:59:03.583153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.546 qpair failed and we were unable to recover it. 
00:36:00.546 [2024-10-11 22:59:03.593018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.546 [2024-10-11 22:59:03.593110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.546 [2024-10-11 22:59:03.593136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.546 [2024-10-11 22:59:03.593151] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.546 [2024-10-11 22:59:03.593164] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.546 [2024-10-11 22:59:03.593192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.546 qpair failed and we were unable to recover it. 
00:36:00.546 [2024-10-11 22:59:03.603042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.546 [2024-10-11 22:59:03.603132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.546 [2024-10-11 22:59:03.603156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.546 [2024-10-11 22:59:03.603170] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.547 [2024-10-11 22:59:03.603182] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.547 [2024-10-11 22:59:03.603210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.547 qpair failed and we were unable to recover it. 
00:36:00.547 [2024-10-11 22:59:03.613089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.547 [2024-10-11 22:59:03.613185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.547 [2024-10-11 22:59:03.613210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.547 [2024-10-11 22:59:03.613229] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.547 [2024-10-11 22:59:03.613243] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.547 [2024-10-11 22:59:03.613272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.547 qpair failed and we were unable to recover it. 
00:36:00.547 [2024-10-11 22:59:03.623059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.547 [2024-10-11 22:59:03.623198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.547 [2024-10-11 22:59:03.623225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.547 [2024-10-11 22:59:03.623239] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.547 [2024-10-11 22:59:03.623252] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.547 [2024-10-11 22:59:03.623280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.547 qpair failed and we were unable to recover it. 
00:36:00.547 [2024-10-11 22:59:03.633081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.547 [2024-10-11 22:59:03.633163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.547 [2024-10-11 22:59:03.633188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.547 [2024-10-11 22:59:03.633202] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.547 [2024-10-11 22:59:03.633215] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.547 [2024-10-11 22:59:03.633243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.547 qpair failed and we were unable to recover it. 
00:36:00.547 [2024-10-11 22:59:03.643101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.547 [2024-10-11 22:59:03.643189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.547 [2024-10-11 22:59:03.643213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.547 [2024-10-11 22:59:03.643227] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.547 [2024-10-11 22:59:03.643240] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.547 [2024-10-11 22:59:03.643269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.547 qpair failed and we were unable to recover it. 
00:36:00.547 [2024-10-11 22:59:03.653191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.547 [2024-10-11 22:59:03.653334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.547 [2024-10-11 22:59:03.653360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.547 [2024-10-11 22:59:03.653375] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.547 [2024-10-11 22:59:03.653387] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.547 [2024-10-11 22:59:03.653416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.547 qpair failed and we were unable to recover it. 
00:36:00.547 [2024-10-11 22:59:03.663215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.547 [2024-10-11 22:59:03.663312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.547 [2024-10-11 22:59:03.663337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.547 [2024-10-11 22:59:03.663352] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.547 [2024-10-11 22:59:03.663381] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.547 [2024-10-11 22:59:03.663411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.547 qpair failed and we were unable to recover it. 
00:36:00.547 [2024-10-11 22:59:03.673224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.547 [2024-10-11 22:59:03.673317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.547 [2024-10-11 22:59:03.673343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.547 [2024-10-11 22:59:03.673358] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.547 [2024-10-11 22:59:03.673370] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.547 [2024-10-11 22:59:03.673399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.547 qpair failed and we were unable to recover it. 
00:36:00.547 [2024-10-11 22:59:03.683291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.547 [2024-10-11 22:59:03.683380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.547 [2024-10-11 22:59:03.683404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.547 [2024-10-11 22:59:03.683419] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.547 [2024-10-11 22:59:03.683431] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.547 [2024-10-11 22:59:03.683460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.547 qpair failed and we were unable to recover it. 
00:36:00.547 [2024-10-11 22:59:03.693266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.547 [2024-10-11 22:59:03.693359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.547 [2024-10-11 22:59:03.693384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.547 [2024-10-11 22:59:03.693398] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.547 [2024-10-11 22:59:03.693411] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.547 [2024-10-11 22:59:03.693439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.547 qpair failed and we were unable to recover it. 
00:36:00.547 [2024-10-11 22:59:03.703269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.547 [2024-10-11 22:59:03.703363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.547 [2024-10-11 22:59:03.703388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.547 [2024-10-11 22:59:03.703403] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.547 [2024-10-11 22:59:03.703415] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.547 [2024-10-11 22:59:03.703443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.547 qpair failed and we were unable to recover it. 
00:36:00.547 [2024-10-11 22:59:03.713341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.547 [2024-10-11 22:59:03.713453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.547 [2024-10-11 22:59:03.713480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.547 [2024-10-11 22:59:03.713494] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.547 [2024-10-11 22:59:03.713507] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.547 [2024-10-11 22:59:03.713535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.547 qpair failed and we were unable to recover it. 
00:36:00.547 [2024-10-11 22:59:03.723353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.547 [2024-10-11 22:59:03.723476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.547 [2024-10-11 22:59:03.723502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.547 [2024-10-11 22:59:03.723517] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.547 [2024-10-11 22:59:03.723530] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.547 [2024-10-11 22:59:03.723568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.547 qpair failed and we were unable to recover it. 
00:36:00.547 [2024-10-11 22:59:03.733415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.547 [2024-10-11 22:59:03.733503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.547 [2024-10-11 22:59:03.733527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.547 [2024-10-11 22:59:03.733541] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.547 [2024-10-11 22:59:03.733563] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.547 [2024-10-11 22:59:03.733594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.547 qpair failed and we were unable to recover it. 
00:36:00.547 [2024-10-11 22:59:03.743415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.547 [2024-10-11 22:59:03.743504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.547 [2024-10-11 22:59:03.743529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.547 [2024-10-11 22:59:03.743543] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.547 [2024-10-11 22:59:03.743571] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.547 [2024-10-11 22:59:03.743602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.547 qpair failed and we were unable to recover it. 
00:36:00.547 [2024-10-11 22:59:03.753454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.548 [2024-10-11 22:59:03.753540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.548 [2024-10-11 22:59:03.753573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.548 [2024-10-11 22:59:03.753589] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.548 [2024-10-11 22:59:03.753607] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.548 [2024-10-11 22:59:03.753639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.548 qpair failed and we were unable to recover it. 
00:36:00.548 [2024-10-11 22:59:03.763462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.548 [2024-10-11 22:59:03.763602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.548 [2024-10-11 22:59:03.763633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.548 [2024-10-11 22:59:03.763650] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.548 [2024-10-11 22:59:03.763663] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.548 [2024-10-11 22:59:03.763693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.548 qpair failed and we were unable to recover it. 
00:36:00.548 [2024-10-11 22:59:03.773562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.548 [2024-10-11 22:59:03.773711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.548 [2024-10-11 22:59:03.773737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.548 [2024-10-11 22:59:03.773751] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.548 [2024-10-11 22:59:03.773763] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.548 [2024-10-11 22:59:03.773792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.548 qpair failed and we were unable to recover it. 
00:36:00.548 [2024-10-11 22:59:03.783528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.548 [2024-10-11 22:59:03.783628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.548 [2024-10-11 22:59:03.783653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.548 [2024-10-11 22:59:03.783667] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.548 [2024-10-11 22:59:03.783680] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.548 [2024-10-11 22:59:03.783709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.548 qpair failed and we were unable to recover it. 
00:36:00.548 [2024-10-11 22:59:03.793584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.548 [2024-10-11 22:59:03.793683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.548 [2024-10-11 22:59:03.793709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.548 [2024-10-11 22:59:03.793724] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.548 [2024-10-11 22:59:03.793736] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.548 [2024-10-11 22:59:03.793765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.548 qpair failed and we were unable to recover it. 
00:36:00.548 [2024-10-11 22:59:03.803582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.548 [2024-10-11 22:59:03.803692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.548 [2024-10-11 22:59:03.803719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.548 [2024-10-11 22:59:03.803733] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.548 [2024-10-11 22:59:03.803745] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.548 [2024-10-11 22:59:03.803774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.548 qpair failed and we were unable to recover it. 
00:36:00.808 [2024-10-11 22:59:03.813700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.808 [2024-10-11 22:59:03.813795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.808 [2024-10-11 22:59:03.813819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.808 [2024-10-11 22:59:03.813833] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.808 [2024-10-11 22:59:03.813845] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.808 [2024-10-11 22:59:03.813873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.808 qpair failed and we were unable to recover it. 
00:36:00.808 [2024-10-11 22:59:03.823660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.808 [2024-10-11 22:59:03.823778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.808 [2024-10-11 22:59:03.823803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.808 [2024-10-11 22:59:03.823817] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.808 [2024-10-11 22:59:03.823830] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.808 [2024-10-11 22:59:03.823858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.808 qpair failed and we were unable to recover it. 
00:36:00.808 [2024-10-11 22:59:03.833730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.808 [2024-10-11 22:59:03.833841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.808 [2024-10-11 22:59:03.833868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.808 [2024-10-11 22:59:03.833883] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.808 [2024-10-11 22:59:03.833895] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.808 [2024-10-11 22:59:03.833923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.808 qpair failed and we were unable to recover it. 
00:36:00.808 [2024-10-11 22:59:03.843791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.808 [2024-10-11 22:59:03.843887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.808 [2024-10-11 22:59:03.843913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.808 [2024-10-11 22:59:03.843927] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.808 [2024-10-11 22:59:03.843945] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.808 [2024-10-11 22:59:03.843975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.808 qpair failed and we were unable to recover it. 
00:36:00.808 [2024-10-11 22:59:03.853731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.808 [2024-10-11 22:59:03.853851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.808 [2024-10-11 22:59:03.853877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.808 [2024-10-11 22:59:03.853892] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.808 [2024-10-11 22:59:03.853904] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.808 [2024-10-11 22:59:03.853933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.808 qpair failed and we were unable to recover it. 
00:36:00.808 [2024-10-11 22:59:03.863808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.808 [2024-10-11 22:59:03.863901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.808 [2024-10-11 22:59:03.863925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.808 [2024-10-11 22:59:03.863939] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.808 [2024-10-11 22:59:03.863951] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.808 [2024-10-11 22:59:03.863981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.808 qpair failed and we were unable to recover it. 
00:36:00.808 [2024-10-11 22:59:03.873809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.808 [2024-10-11 22:59:03.873904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.808 [2024-10-11 22:59:03.873929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.808 [2024-10-11 22:59:03.873944] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.808 [2024-10-11 22:59:03.873956] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.808 [2024-10-11 22:59:03.873985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.808 qpair failed and we were unable to recover it. 
00:36:00.808 [2024-10-11 22:59:03.883822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.808 [2024-10-11 22:59:03.883916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.808 [2024-10-11 22:59:03.883942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.808 [2024-10-11 22:59:03.883957] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.808 [2024-10-11 22:59:03.883969] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.808 [2024-10-11 22:59:03.883997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.808 qpair failed and we were unable to recover it. 
00:36:00.808 [2024-10-11 22:59:03.893847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.808 [2024-10-11 22:59:03.893941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.808 [2024-10-11 22:59:03.893966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.808 [2024-10-11 22:59:03.893980] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.808 [2024-10-11 22:59:03.893992] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.808 [2024-10-11 22:59:03.894020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.808 qpair failed and we were unable to recover it. 
00:36:00.808 [2024-10-11 22:59:03.903890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.808 [2024-10-11 22:59:03.903980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.808 [2024-10-11 22:59:03.904004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.808 [2024-10-11 22:59:03.904018] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.808 [2024-10-11 22:59:03.904031] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.808 [2024-10-11 22:59:03.904059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.808 qpair failed and we were unable to recover it. 
00:36:00.808 [2024-10-11 22:59:03.913978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.808 [2024-10-11 22:59:03.914111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.808 [2024-10-11 22:59:03.914138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.808 [2024-10-11 22:59:03.914152] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.808 [2024-10-11 22:59:03.914165] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.808 [2024-10-11 22:59:03.914193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.808 qpair failed and we were unable to recover it. 
00:36:00.808 [2024-10-11 22:59:03.923906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.808 [2024-10-11 22:59:03.923995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.808 [2024-10-11 22:59:03.924019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.808 [2024-10-11 22:59:03.924034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.808 [2024-10-11 22:59:03.924046] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.808 [2024-10-11 22:59:03.924075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.808 qpair failed and we were unable to recover it. 
00:36:00.808 [2024-10-11 22:59:03.933954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.808 [2024-10-11 22:59:03.934045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.809 [2024-10-11 22:59:03.934069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.809 [2024-10-11 22:59:03.934088] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.809 [2024-10-11 22:59:03.934102] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.809 [2024-10-11 22:59:03.934130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.809 qpair failed and we were unable to recover it. 
00:36:00.809 [2024-10-11 22:59:03.943969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.809 [2024-10-11 22:59:03.944058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.809 [2024-10-11 22:59:03.944083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.809 [2024-10-11 22:59:03.944098] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.809 [2024-10-11 22:59:03.944110] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.809 [2024-10-11 22:59:03.944138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.809 qpair failed and we were unable to recover it. 
00:36:00.809 [2024-10-11 22:59:03.953997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.809 [2024-10-11 22:59:03.954084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.809 [2024-10-11 22:59:03.954109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.809 [2024-10-11 22:59:03.954123] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.809 [2024-10-11 22:59:03.954135] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.809 [2024-10-11 22:59:03.954164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.809 qpair failed and we were unable to recover it. 
00:36:00.809 [2024-10-11 22:59:03.964014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.809 [2024-10-11 22:59:03.964104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.809 [2024-10-11 22:59:03.964129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.809 [2024-10-11 22:59:03.964143] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.809 [2024-10-11 22:59:03.964155] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.809 [2024-10-11 22:59:03.964184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.809 qpair failed and we were unable to recover it. 
00:36:00.809 [2024-10-11 22:59:03.974153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.809 [2024-10-11 22:59:03.974242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.809 [2024-10-11 22:59:03.974267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.809 [2024-10-11 22:59:03.974280] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.809 [2024-10-11 22:59:03.974293] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.809 [2024-10-11 22:59:03.974320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.809 qpair failed and we were unable to recover it. 
00:36:00.809 [2024-10-11 22:59:03.984080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.809 [2024-10-11 22:59:03.984179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.809 [2024-10-11 22:59:03.984205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.809 [2024-10-11 22:59:03.984220] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.809 [2024-10-11 22:59:03.984232] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.809 [2024-10-11 22:59:03.984260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.809 qpair failed and we were unable to recover it. 
00:36:00.809 [2024-10-11 22:59:03.994141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.809 [2024-10-11 22:59:03.994239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.809 [2024-10-11 22:59:03.994265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.809 [2024-10-11 22:59:03.994279] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.809 [2024-10-11 22:59:03.994292] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.809 [2024-10-11 22:59:03.994320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.809 qpair failed and we were unable to recover it. 
00:36:00.809 [2024-10-11 22:59:04.004160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.809 [2024-10-11 22:59:04.004253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.809 [2024-10-11 22:59:04.004277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.809 [2024-10-11 22:59:04.004292] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.809 [2024-10-11 22:59:04.004304] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.809 [2024-10-11 22:59:04.004333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.809 qpair failed and we were unable to recover it. 
00:36:00.809 [2024-10-11 22:59:04.014185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.809 [2024-10-11 22:59:04.014299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.809 [2024-10-11 22:59:04.014325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.809 [2024-10-11 22:59:04.014340] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.809 [2024-10-11 22:59:04.014352] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.809 [2024-10-11 22:59:04.014382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.809 qpair failed and we were unable to recover it. 
00:36:00.809 [2024-10-11 22:59:04.024213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.809 [2024-10-11 22:59:04.024308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.809 [2024-10-11 22:59:04.024332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.809 [2024-10-11 22:59:04.024351] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.809 [2024-10-11 22:59:04.024365] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.809 [2024-10-11 22:59:04.024393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.809 qpair failed and we were unable to recover it. 
00:36:00.809 [2024-10-11 22:59:04.034278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.809 [2024-10-11 22:59:04.034384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.809 [2024-10-11 22:59:04.034409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.809 [2024-10-11 22:59:04.034425] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.809 [2024-10-11 22:59:04.034437] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.809 [2024-10-11 22:59:04.034466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.809 qpair failed and we were unable to recover it. 
00:36:00.809 [2024-10-11 22:59:04.044242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.809 [2024-10-11 22:59:04.044369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.809 [2024-10-11 22:59:04.044401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.809 [2024-10-11 22:59:04.044416] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.809 [2024-10-11 22:59:04.044429] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.809 [2024-10-11 22:59:04.044457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.809 qpair failed and we were unable to recover it. 
00:36:00.809 [2024-10-11 22:59:04.054284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.809 [2024-10-11 22:59:04.054381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.809 [2024-10-11 22:59:04.054406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.809 [2024-10-11 22:59:04.054419] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.809 [2024-10-11 22:59:04.054432] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.809 [2024-10-11 22:59:04.054461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.809 qpair failed and we were unable to recover it. 
00:36:00.809 [2024-10-11 22:59:04.064287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.809 [2024-10-11 22:59:04.064377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.809 [2024-10-11 22:59:04.064401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.809 [2024-10-11 22:59:04.064415] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.809 [2024-10-11 22:59:04.064427] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.809 [2024-10-11 22:59:04.064455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.809 qpair failed and we were unable to recover it. 
00:36:00.809 [2024-10-11 22:59:04.074357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.809 [2024-10-11 22:59:04.074440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.809 [2024-10-11 22:59:04.074464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.809 [2024-10-11 22:59:04.074478] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.809 [2024-10-11 22:59:04.074491] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:00.810 [2024-10-11 22:59:04.074520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.810 qpair failed and we were unable to recover it. 
00:36:01.068 [2024-10-11 22:59:04.084353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.068 [2024-10-11 22:59:04.084448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.068 [2024-10-11 22:59:04.084473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.068 [2024-10-11 22:59:04.084487] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.068 [2024-10-11 22:59:04.084499] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.068 [2024-10-11 22:59:04.084528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.068 qpair failed and we were unable to recover it. 
00:36:01.068 [2024-10-11 22:59:04.094389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.068 [2024-10-11 22:59:04.094498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.069 [2024-10-11 22:59:04.094522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.069 [2024-10-11 22:59:04.094536] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.069 [2024-10-11 22:59:04.094555] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.069 [2024-10-11 22:59:04.094587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.069 qpair failed and we were unable to recover it. 
00:36:01.069 [2024-10-11 22:59:04.104403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.069 [2024-10-11 22:59:04.104531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.069 [2024-10-11 22:59:04.104567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.069 [2024-10-11 22:59:04.104584] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.069 [2024-10-11 22:59:04.104596] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.069 [2024-10-11 22:59:04.104625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.069 qpair failed and we were unable to recover it. 
00:36:01.069 [2024-10-11 22:59:04.114442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.069 [2024-10-11 22:59:04.114533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.069 [2024-10-11 22:59:04.114567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.069 [2024-10-11 22:59:04.114588] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.069 [2024-10-11 22:59:04.114601] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.069 [2024-10-11 22:59:04.114631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.069 qpair failed and we were unable to recover it. 
00:36:01.069 [2024-10-11 22:59:04.124499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.069 [2024-10-11 22:59:04.124598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.069 [2024-10-11 22:59:04.124623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.069 [2024-10-11 22:59:04.124637] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.069 [2024-10-11 22:59:04.124649] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.069 [2024-10-11 22:59:04.124679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.069 qpair failed and we were unable to recover it. 
00:36:01.069 [2024-10-11 22:59:04.134513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.069 [2024-10-11 22:59:04.134610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.069 [2024-10-11 22:59:04.134635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.069 [2024-10-11 22:59:04.134650] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.069 [2024-10-11 22:59:04.134662] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.069 [2024-10-11 22:59:04.134691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.069 qpair failed and we were unable to recover it. 
00:36:01.069 [2024-10-11 22:59:04.144524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.069 [2024-10-11 22:59:04.144619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.069 [2024-10-11 22:59:04.144644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.069 [2024-10-11 22:59:04.144659] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.069 [2024-10-11 22:59:04.144671] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.069 [2024-10-11 22:59:04.144699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.069 qpair failed and we were unable to recover it. 
00:36:01.069 [2024-10-11 22:59:04.154583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.069 [2024-10-11 22:59:04.154687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.069 [2024-10-11 22:59:04.154715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.069 [2024-10-11 22:59:04.154732] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.069 [2024-10-11 22:59:04.154745] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.069 [2024-10-11 22:59:04.154776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.069 qpair failed and we were unable to recover it. 
00:36:01.069 [2024-10-11 22:59:04.164609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.069 [2024-10-11 22:59:04.164692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.069 [2024-10-11 22:59:04.164717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.069 [2024-10-11 22:59:04.164731] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.069 [2024-10-11 22:59:04.164743] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.069 [2024-10-11 22:59:04.164772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.069 qpair failed and we were unable to recover it. 
00:36:01.069 [2024-10-11 22:59:04.174625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.069 [2024-10-11 22:59:04.174717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.069 [2024-10-11 22:59:04.174742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.069 [2024-10-11 22:59:04.174756] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.069 [2024-10-11 22:59:04.174768] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.069 [2024-10-11 22:59:04.174797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.069 qpair failed and we were unable to recover it. 
00:36:01.069 [2024-10-11 22:59:04.184647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.069 [2024-10-11 22:59:04.184746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.069 [2024-10-11 22:59:04.184770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.069 [2024-10-11 22:59:04.184784] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.069 [2024-10-11 22:59:04.184797] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.069 [2024-10-11 22:59:04.184825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.069 qpair failed and we were unable to recover it. 
00:36:01.069 [2024-10-11 22:59:04.194652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.069 [2024-10-11 22:59:04.194740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.069 [2024-10-11 22:59:04.194765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.069 [2024-10-11 22:59:04.194780] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.069 [2024-10-11 22:59:04.194793] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.069 [2024-10-11 22:59:04.194822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.069 qpair failed and we were unable to recover it. 
00:36:01.069 [2024-10-11 22:59:04.204687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.069 [2024-10-11 22:59:04.204767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.069 [2024-10-11 22:59:04.204792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.069 [2024-10-11 22:59:04.204812] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.069 [2024-10-11 22:59:04.204826] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.069 [2024-10-11 22:59:04.204855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.069 qpair failed and we were unable to recover it. 
00:36:01.069 [2024-10-11 22:59:04.214741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.069 [2024-10-11 22:59:04.214829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.069 [2024-10-11 22:59:04.214853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.069 [2024-10-11 22:59:04.214868] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.069 [2024-10-11 22:59:04.214880] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.069 [2024-10-11 22:59:04.214908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.069 qpair failed and we were unable to recover it. 
00:36:01.069 [2024-10-11 22:59:04.224774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.069 [2024-10-11 22:59:04.224900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.069 [2024-10-11 22:59:04.224925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.069 [2024-10-11 22:59:04.224939] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.069 [2024-10-11 22:59:04.224952] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.069 [2024-10-11 22:59:04.224980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.069 qpair failed and we were unable to recover it. 
00:36:01.069 [2024-10-11 22:59:04.234800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.069 [2024-10-11 22:59:04.234893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.069 [2024-10-11 22:59:04.234918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.069 [2024-10-11 22:59:04.234932] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.070 [2024-10-11 22:59:04.234945] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.070 [2024-10-11 22:59:04.234973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.070 qpair failed and we were unable to recover it. 
00:36:01.070 [2024-10-11 22:59:04.244852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.070 [2024-10-11 22:59:04.244978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.070 [2024-10-11 22:59:04.245003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.070 [2024-10-11 22:59:04.245018] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.070 [2024-10-11 22:59:04.245031] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.070 [2024-10-11 22:59:04.245059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.070 qpair failed and we were unable to recover it. 
00:36:01.070 [2024-10-11 22:59:04.254865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.070 [2024-10-11 22:59:04.254958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.070 [2024-10-11 22:59:04.254983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.070 [2024-10-11 22:59:04.254998] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.070 [2024-10-11 22:59:04.255010] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.070 [2024-10-11 22:59:04.255039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.070 qpair failed and we were unable to recover it. 
00:36:01.070 [2024-10-11 22:59:04.264873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.070 [2024-10-11 22:59:04.264997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.070 [2024-10-11 22:59:04.265025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.070 [2024-10-11 22:59:04.265040] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.070 [2024-10-11 22:59:04.265053] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.070 [2024-10-11 22:59:04.265082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.070 qpair failed and we were unable to recover it. 
00:36:01.070 [2024-10-11 22:59:04.274900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.070 [2024-10-11 22:59:04.274994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.070 [2024-10-11 22:59:04.275018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.070 [2024-10-11 22:59:04.275033] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.070 [2024-10-11 22:59:04.275045] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.070 [2024-10-11 22:59:04.275074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.070 qpair failed and we were unable to recover it. 
00:36:01.070 [2024-10-11 22:59:04.284932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.070 [2024-10-11 22:59:04.285016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.070 [2024-10-11 22:59:04.285040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.070 [2024-10-11 22:59:04.285055] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.070 [2024-10-11 22:59:04.285067] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.070 [2024-10-11 22:59:04.285095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.070 qpair failed and we were unable to recover it. 
00:36:01.070 [2024-10-11 22:59:04.294943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.070 [2024-10-11 22:59:04.295028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.070 [2024-10-11 22:59:04.295058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.070 [2024-10-11 22:59:04.295074] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.070 [2024-10-11 22:59:04.295087] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.070 [2024-10-11 22:59:04.295116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.070 qpair failed and we were unable to recover it. 
00:36:01.070 [2024-10-11 22:59:04.305004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.070 [2024-10-11 22:59:04.305089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.070 [2024-10-11 22:59:04.305114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.070 [2024-10-11 22:59:04.305128] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.070 [2024-10-11 22:59:04.305141] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.070 [2024-10-11 22:59:04.305169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.070 qpair failed and we were unable to recover it. 
00:36:01.070 [2024-10-11 22:59:04.315022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.070 [2024-10-11 22:59:04.315106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.070 [2024-10-11 22:59:04.315131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.070 [2024-10-11 22:59:04.315145] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.070 [2024-10-11 22:59:04.315158] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.070 [2024-10-11 22:59:04.315187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.070 qpair failed and we were unable to recover it. 
00:36:01.070 [2024-10-11 22:59:04.325023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.070 [2024-10-11 22:59:04.325120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.070 [2024-10-11 22:59:04.325145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.070 [2024-10-11 22:59:04.325160] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.070 [2024-10-11 22:59:04.325172] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.070 [2024-10-11 22:59:04.325201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.070 qpair failed and we were unable to recover it. 
00:36:01.070 [2024-10-11 22:59:04.335104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.070 [2024-10-11 22:59:04.335216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.070 [2024-10-11 22:59:04.335245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.070 [2024-10-11 22:59:04.335262] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.070 [2024-10-11 22:59:04.335276] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.070 [2024-10-11 22:59:04.335305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.070 qpair failed and we were unable to recover it. 
00:36:01.330 [2024-10-11 22:59:04.345114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.330 [2024-10-11 22:59:04.345205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.330 [2024-10-11 22:59:04.345230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.330 [2024-10-11 22:59:04.345244] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.330 [2024-10-11 22:59:04.345257] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.330 [2024-10-11 22:59:04.345286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.330 qpair failed and we were unable to recover it. 
00:36:01.330 [2024-10-11 22:59:04.355193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.330 [2024-10-11 22:59:04.355283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.330 [2024-10-11 22:59:04.355308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.330 [2024-10-11 22:59:04.355323] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.330 [2024-10-11 22:59:04.355336] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.330 [2024-10-11 22:59:04.355365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.330 qpair failed and we were unable to recover it. 
00:36:01.330 [2024-10-11 22:59:04.365161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.330 [2024-10-11 22:59:04.365281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.330 [2024-10-11 22:59:04.365306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.330 [2024-10-11 22:59:04.365320] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.330 [2024-10-11 22:59:04.365332] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.330 [2024-10-11 22:59:04.365361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.330 qpair failed and we were unable to recover it. 
00:36:01.330 [2024-10-11 22:59:04.375178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.330 [2024-10-11 22:59:04.375269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.330 [2024-10-11 22:59:04.375293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.330 [2024-10-11 22:59:04.375308] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.330 [2024-10-11 22:59:04.375321] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.330 [2024-10-11 22:59:04.375350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.330 qpair failed and we were unable to recover it. 
00:36:01.330 [2024-10-11 22:59:04.385208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.330 [2024-10-11 22:59:04.385318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.330 [2024-10-11 22:59:04.385349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.330 [2024-10-11 22:59:04.385365] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.330 [2024-10-11 22:59:04.385378] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.330 [2024-10-11 22:59:04.385406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.330 qpair failed and we were unable to recover it. 
00:36:01.330 [2024-10-11 22:59:04.395430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.330 [2024-10-11 22:59:04.395521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.330 [2024-10-11 22:59:04.395546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.330 [2024-10-11 22:59:04.395569] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.330 [2024-10-11 22:59:04.395593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.330 [2024-10-11 22:59:04.395621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.330 qpair failed and we were unable to recover it. 
00:36:01.330 [2024-10-11 22:59:04.405288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.330 [2024-10-11 22:59:04.405372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.330 [2024-10-11 22:59:04.405397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.330 [2024-10-11 22:59:04.405412] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.330 [2024-10-11 22:59:04.405425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.330 [2024-10-11 22:59:04.405453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.330 qpair failed and we were unable to recover it. 
00:36:01.330 [2024-10-11 22:59:04.415301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.330 [2024-10-11 22:59:04.415394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.330 [2024-10-11 22:59:04.415418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.330 [2024-10-11 22:59:04.415433] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.330 [2024-10-11 22:59:04.415445] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.330 [2024-10-11 22:59:04.415473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.330 qpair failed and we were unable to recover it. 
00:36:01.330 [2024-10-11 22:59:04.425409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.330 [2024-10-11 22:59:04.425499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.330 [2024-10-11 22:59:04.425524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.330 [2024-10-11 22:59:04.425538] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.330 [2024-10-11 22:59:04.425560] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.330 [2024-10-11 22:59:04.425601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.330 qpair failed and we were unable to recover it. 
00:36:01.330 [2024-10-11 22:59:04.435374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.331 [2024-10-11 22:59:04.435459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.331 [2024-10-11 22:59:04.435484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.331 [2024-10-11 22:59:04.435499] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.331 [2024-10-11 22:59:04.435511] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.331 [2024-10-11 22:59:04.435540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.331 qpair failed and we were unable to recover it. 
00:36:01.331 [2024-10-11 22:59:04.445355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.331 [2024-10-11 22:59:04.445445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.331 [2024-10-11 22:59:04.445470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.331 [2024-10-11 22:59:04.445486] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.331 [2024-10-11 22:59:04.445500] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.331 [2024-10-11 22:59:04.445529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.331 qpair failed and we were unable to recover it. 
00:36:01.331 [2024-10-11 22:59:04.455407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.331 [2024-10-11 22:59:04.455495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.331 [2024-10-11 22:59:04.455519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.331 [2024-10-11 22:59:04.455534] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.331 [2024-10-11 22:59:04.455546] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.331 [2024-10-11 22:59:04.455587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.331 qpair failed and we were unable to recover it. 
00:36:01.331 [2024-10-11 22:59:04.465423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.331 [2024-10-11 22:59:04.465517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.331 [2024-10-11 22:59:04.465542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.331 [2024-10-11 22:59:04.465567] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.331 [2024-10-11 22:59:04.465581] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.331 [2024-10-11 22:59:04.465609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.331 qpair failed and we were unable to recover it. 
00:36:01.331 [2024-10-11 22:59:04.475481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.331 [2024-10-11 22:59:04.475579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.331 [2024-10-11 22:59:04.475611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.331 [2024-10-11 22:59:04.475631] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.331 [2024-10-11 22:59:04.475644] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.331 [2024-10-11 22:59:04.475674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.331 qpair failed and we were unable to recover it. 
00:36:01.331 [2024-10-11 22:59:04.485486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.331 [2024-10-11 22:59:04.485578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.331 [2024-10-11 22:59:04.485604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.331 [2024-10-11 22:59:04.485618] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.331 [2024-10-11 22:59:04.485631] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.331 [2024-10-11 22:59:04.485660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.331 qpair failed and we were unable to recover it. 
00:36:01.331 [2024-10-11 22:59:04.495572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.331 [2024-10-11 22:59:04.495672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.331 [2024-10-11 22:59:04.495697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.331 [2024-10-11 22:59:04.495712] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.331 [2024-10-11 22:59:04.495724] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.331 [2024-10-11 22:59:04.495753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.331 qpair failed and we were unable to recover it. 
00:36:01.331 [2024-10-11 22:59:04.505537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.331 [2024-10-11 22:59:04.505636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.331 [2024-10-11 22:59:04.505661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.331 [2024-10-11 22:59:04.505675] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.331 [2024-10-11 22:59:04.505687] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.331 [2024-10-11 22:59:04.505717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.331 qpair failed and we were unable to recover it. 
00:36:01.331 [2024-10-11 22:59:04.515577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.331 [2024-10-11 22:59:04.515664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.331 [2024-10-11 22:59:04.515690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.331 [2024-10-11 22:59:04.515705] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.331 [2024-10-11 22:59:04.515718] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.331 [2024-10-11 22:59:04.515752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.331 qpair failed and we were unable to recover it. 
00:36:01.331 [2024-10-11 22:59:04.525622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.331 [2024-10-11 22:59:04.525713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.331 [2024-10-11 22:59:04.525737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.331 [2024-10-11 22:59:04.525752] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.331 [2024-10-11 22:59:04.525764] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.331 [2024-10-11 22:59:04.525793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.331 qpair failed and we were unable to recover it. 
00:36:01.331 [2024-10-11 22:59:04.535682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.331 [2024-10-11 22:59:04.535778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.331 [2024-10-11 22:59:04.535802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.331 [2024-10-11 22:59:04.535817] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.331 [2024-10-11 22:59:04.535830] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.331 [2024-10-11 22:59:04.535858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.331 qpair failed and we were unable to recover it. 
00:36:01.331 [2024-10-11 22:59:04.545754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.331 [2024-10-11 22:59:04.545842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.331 [2024-10-11 22:59:04.545868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.331 [2024-10-11 22:59:04.545883] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.331 [2024-10-11 22:59:04.545895] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.331 [2024-10-11 22:59:04.545924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.331 qpair failed and we were unable to recover it. 
00:36:01.331 [2024-10-11 22:59:04.555705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.331 [2024-10-11 22:59:04.555796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.331 [2024-10-11 22:59:04.555821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.331 [2024-10-11 22:59:04.555835] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.331 [2024-10-11 22:59:04.555848] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.331 [2024-10-11 22:59:04.555876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.331 qpair failed and we were unable to recover it. 
00:36:01.331 [2024-10-11 22:59:04.565752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.331 [2024-10-11 22:59:04.565880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.331 [2024-10-11 22:59:04.565910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.331 [2024-10-11 22:59:04.565925] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.331 [2024-10-11 22:59:04.565938] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.331 [2024-10-11 22:59:04.565965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.331 qpair failed and we were unable to recover it. 
00:36:01.331 [2024-10-11 22:59:04.575834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.331 [2024-10-11 22:59:04.575927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.331 [2024-10-11 22:59:04.575952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.331 [2024-10-11 22:59:04.575966] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.331 [2024-10-11 22:59:04.575979] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.332 [2024-10-11 22:59:04.576007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.332 qpair failed and we were unable to recover it. 
00:36:01.332 [2024-10-11 22:59:04.585810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.332 [2024-10-11 22:59:04.585909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.332 [2024-10-11 22:59:04.585934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.332 [2024-10-11 22:59:04.585949] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.332 [2024-10-11 22:59:04.585961] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.332 [2024-10-11 22:59:04.585990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.332 qpair failed and we were unable to recover it. 
00:36:01.332 [2024-10-11 22:59:04.595835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.332 [2024-10-11 22:59:04.595945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.332 [2024-10-11 22:59:04.595972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.332 [2024-10-11 22:59:04.595990] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.332 [2024-10-11 22:59:04.596003] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.332 [2024-10-11 22:59:04.596032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.332 qpair failed and we were unable to recover it. 
00:36:01.591 [2024-10-11 22:59:04.605877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.591 [2024-10-11 22:59:04.606002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.591 [2024-10-11 22:59:04.606027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.591 [2024-10-11 22:59:04.606042] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.591 [2024-10-11 22:59:04.606055] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.591 [2024-10-11 22:59:04.606089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.591 qpair failed and we were unable to recover it. 
00:36:01.591 [2024-10-11 22:59:04.615899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.591 [2024-10-11 22:59:04.615995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.591 [2024-10-11 22:59:04.616020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.591 [2024-10-11 22:59:04.616035] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.591 [2024-10-11 22:59:04.616047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.591 [2024-10-11 22:59:04.616075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.591 qpair failed and we were unable to recover it. 
00:36:01.591 [2024-10-11 22:59:04.625957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.591 [2024-10-11 22:59:04.626042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.591 [2024-10-11 22:59:04.626067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.591 [2024-10-11 22:59:04.626082] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.591 [2024-10-11 22:59:04.626095] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.591 [2024-10-11 22:59:04.626123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.591 qpair failed and we were unable to recover it. 
00:36:01.591 [2024-10-11 22:59:04.635917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.591 [2024-10-11 22:59:04.636004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.591 [2024-10-11 22:59:04.636028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.591 [2024-10-11 22:59:04.636043] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.591 [2024-10-11 22:59:04.636055] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.591 [2024-10-11 22:59:04.636084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.591 qpair failed and we were unable to recover it. 
00:36:01.591 [2024-10-11 22:59:04.645928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.591 [2024-10-11 22:59:04.646019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.591 [2024-10-11 22:59:04.646043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.591 [2024-10-11 22:59:04.646058] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.591 [2024-10-11 22:59:04.646070] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.591 [2024-10-11 22:59:04.646098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.591 qpair failed and we were unable to recover it. 
00:36:01.591 [2024-10-11 22:59:04.655988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.591 [2024-10-11 22:59:04.656100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.591 [2024-10-11 22:59:04.656130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.591 [2024-10-11 22:59:04.656146] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.591 [2024-10-11 22:59:04.656158] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.591 [2024-10-11 22:59:04.656187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.591 qpair failed and we were unable to recover it. 
00:36:01.591 [2024-10-11 22:59:04.666050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.591 [2024-10-11 22:59:04.666145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.591 [2024-10-11 22:59:04.666169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.591 [2024-10-11 22:59:04.666183] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.591 [2024-10-11 22:59:04.666196] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.591 [2024-10-11 22:59:04.666226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.591 qpair failed and we were unable to recover it. 
00:36:01.591 [2024-10-11 22:59:04.676042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.591 [2024-10-11 22:59:04.676134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.591 [2024-10-11 22:59:04.676159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.591 [2024-10-11 22:59:04.676174] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.591 [2024-10-11 22:59:04.676186] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.591 [2024-10-11 22:59:04.676216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.591 qpair failed and we were unable to recover it. 
00:36:01.591 [2024-10-11 22:59:04.686054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.591 [2024-10-11 22:59:04.686162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.591 [2024-10-11 22:59:04.686186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.591 [2024-10-11 22:59:04.686200] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.591 [2024-10-11 22:59:04.686213] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.591 [2024-10-11 22:59:04.686241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.591 qpair failed and we were unable to recover it. 
00:36:01.591 [2024-10-11 22:59:04.696079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.591 [2024-10-11 22:59:04.696203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.591 [2024-10-11 22:59:04.696228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.591 [2024-10-11 22:59:04.696242] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.591 [2024-10-11 22:59:04.696255] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.591 [2024-10-11 22:59:04.696289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.591 qpair failed and we were unable to recover it. 
00:36:01.591 [2024-10-11 22:59:04.706136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.591 [2024-10-11 22:59:04.706223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.591 [2024-10-11 22:59:04.706248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.591 [2024-10-11 22:59:04.706262] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.591 [2024-10-11 22:59:04.706275] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.591 [2024-10-11 22:59:04.706304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.591 qpair failed and we were unable to recover it. 
00:36:01.591 [2024-10-11 22:59:04.716179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.591 [2024-10-11 22:59:04.716291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.591 [2024-10-11 22:59:04.716316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.591 [2024-10-11 22:59:04.716330] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.591 [2024-10-11 22:59:04.716343] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.591 [2024-10-11 22:59:04.716372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.591 qpair failed and we were unable to recover it. 
00:36:01.591 [2024-10-11 22:59:04.726177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.591 [2024-10-11 22:59:04.726266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.591 [2024-10-11 22:59:04.726291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.591 [2024-10-11 22:59:04.726306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.591 [2024-10-11 22:59:04.726318] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.591 [2024-10-11 22:59:04.726346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.591 qpair failed and we were unable to recover it. 
00:36:01.592 [2024-10-11 22:59:04.736241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.592 [2024-10-11 22:59:04.736340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.592 [2024-10-11 22:59:04.736365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.592 [2024-10-11 22:59:04.736379] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.592 [2024-10-11 22:59:04.736392] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.592 [2024-10-11 22:59:04.736420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.592 qpair failed and we were unable to recover it. 
00:36:01.592 [2024-10-11 22:59:04.746239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.592 [2024-10-11 22:59:04.746381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.592 [2024-10-11 22:59:04.746411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.592 [2024-10-11 22:59:04.746426] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.592 [2024-10-11 22:59:04.746439] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.592 [2024-10-11 22:59:04.746468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.592 qpair failed and we were unable to recover it.
00:36:01.592 [2024-10-11 22:59:04.756240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.592 [2024-10-11 22:59:04.756331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.592 [2024-10-11 22:59:04.756359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.592 [2024-10-11 22:59:04.756376] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.592 [2024-10-11 22:59:04.756389] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.592 [2024-10-11 22:59:04.756419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.592 qpair failed and we were unable to recover it.
00:36:01.592 [2024-10-11 22:59:04.766292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.592 [2024-10-11 22:59:04.766378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.592 [2024-10-11 22:59:04.766403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.592 [2024-10-11 22:59:04.766418] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.592 [2024-10-11 22:59:04.766431] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.592 [2024-10-11 22:59:04.766460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.592 qpair failed and we were unable to recover it.
00:36:01.592 [2024-10-11 22:59:04.776311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.592 [2024-10-11 22:59:04.776400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.592 [2024-10-11 22:59:04.776425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.592 [2024-10-11 22:59:04.776439] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.592 [2024-10-11 22:59:04.776452] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.592 [2024-10-11 22:59:04.776482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.592 qpair failed and we were unable to recover it.
00:36:01.592 [2024-10-11 22:59:04.786341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.592 [2024-10-11 22:59:04.786428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.592 [2024-10-11 22:59:04.786453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.592 [2024-10-11 22:59:04.786468] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.592 [2024-10-11 22:59:04.786486] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.592 [2024-10-11 22:59:04.786516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.592 qpair failed and we were unable to recover it.
00:36:01.592 [2024-10-11 22:59:04.796394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.592 [2024-10-11 22:59:04.796487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.592 [2024-10-11 22:59:04.796512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.592 [2024-10-11 22:59:04.796527] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.592 [2024-10-11 22:59:04.796539] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.592 [2024-10-11 22:59:04.796576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.592 qpair failed and we were unable to recover it.
00:36:01.592 [2024-10-11 22:59:04.806400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.592 [2024-10-11 22:59:04.806486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.592 [2024-10-11 22:59:04.806510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.592 [2024-10-11 22:59:04.806525] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.592 [2024-10-11 22:59:04.806538] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.592 [2024-10-11 22:59:04.806574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.592 qpair failed and we were unable to recover it.
00:36:01.592 [2024-10-11 22:59:04.816459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.592 [2024-10-11 22:59:04.816567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.592 [2024-10-11 22:59:04.816592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.592 [2024-10-11 22:59:04.816606] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.592 [2024-10-11 22:59:04.816619] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.592 [2024-10-11 22:59:04.816648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.592 qpair failed and we were unable to recover it.
00:36:01.592 [2024-10-11 22:59:04.826482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.592 [2024-10-11 22:59:04.826578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.592 [2024-10-11 22:59:04.826603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.592 [2024-10-11 22:59:04.826617] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.592 [2024-10-11 22:59:04.826630] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.592 [2024-10-11 22:59:04.826658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.592 qpair failed and we were unable to recover it.
00:36:01.592 [2024-10-11 22:59:04.836477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.592 [2024-10-11 22:59:04.836587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.592 [2024-10-11 22:59:04.836616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.592 [2024-10-11 22:59:04.836632] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.592 [2024-10-11 22:59:04.836644] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.592 [2024-10-11 22:59:04.836674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.592 qpair failed and we were unable to recover it.
00:36:01.592 [2024-10-11 22:59:04.846531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.592 [2024-10-11 22:59:04.846624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.592 [2024-10-11 22:59:04.846650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.592 [2024-10-11 22:59:04.846664] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.592 [2024-10-11 22:59:04.846684] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.592 [2024-10-11 22:59:04.846713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.592 qpair failed and we were unable to recover it.
00:36:01.592 [2024-10-11 22:59:04.856533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.592 [2024-10-11 22:59:04.856642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.592 [2024-10-11 22:59:04.856666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.592 [2024-10-11 22:59:04.856681] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.592 [2024-10-11 22:59:04.856693] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.592 [2024-10-11 22:59:04.856723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.592 qpair failed and we were unable to recover it.
00:36:01.851 [2024-10-11 22:59:04.866574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.851 [2024-10-11 22:59:04.866673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.851 [2024-10-11 22:59:04.866697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.851 [2024-10-11 22:59:04.866712] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.851 [2024-10-11 22:59:04.866724] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.851 [2024-10-11 22:59:04.866753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.851 qpair failed and we were unable to recover it.
00:36:01.851 [2024-10-11 22:59:04.876617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.851 [2024-10-11 22:59:04.876740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.851 [2024-10-11 22:59:04.876766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.851 [2024-10-11 22:59:04.876780] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.851 [2024-10-11 22:59:04.876798] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.851 [2024-10-11 22:59:04.876828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.851 qpair failed and we were unable to recover it.
00:36:01.851 [2024-10-11 22:59:04.886654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.851 [2024-10-11 22:59:04.886760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.851 [2024-10-11 22:59:04.886785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.851 [2024-10-11 22:59:04.886799] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.851 [2024-10-11 22:59:04.886812] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.851 [2024-10-11 22:59:04.886840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.851 qpair failed and we were unable to recover it.
00:36:01.852 [2024-10-11 22:59:04.896680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.852 [2024-10-11 22:59:04.896779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.852 [2024-10-11 22:59:04.896804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.852 [2024-10-11 22:59:04.896818] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.852 [2024-10-11 22:59:04.896831] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.852 [2024-10-11 22:59:04.896860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.852 qpair failed and we were unable to recover it.
00:36:01.852 [2024-10-11 22:59:04.906674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.852 [2024-10-11 22:59:04.906762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.852 [2024-10-11 22:59:04.906788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.852 [2024-10-11 22:59:04.906802] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.852 [2024-10-11 22:59:04.906814] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.852 [2024-10-11 22:59:04.906843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.852 qpair failed and we were unable to recover it.
00:36:01.852 [2024-10-11 22:59:04.916722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.852 [2024-10-11 22:59:04.916813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.852 [2024-10-11 22:59:04.916841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.852 [2024-10-11 22:59:04.916857] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.852 [2024-10-11 22:59:04.916870] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.852 [2024-10-11 22:59:04.916900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.852 qpair failed and we were unable to recover it.
00:36:01.852 [2024-10-11 22:59:04.926745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.852 [2024-10-11 22:59:04.926864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.852 [2024-10-11 22:59:04.926890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.852 [2024-10-11 22:59:04.926904] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.852 [2024-10-11 22:59:04.926917] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.852 [2024-10-11 22:59:04.926946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.852 qpair failed and we were unable to recover it.
00:36:01.852 [2024-10-11 22:59:04.936797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.852 [2024-10-11 22:59:04.936886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.852 [2024-10-11 22:59:04.936912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.852 [2024-10-11 22:59:04.936926] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.852 [2024-10-11 22:59:04.936939] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.852 [2024-10-11 22:59:04.936967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.852 qpair failed and we were unable to recover it.
00:36:01.852 [2024-10-11 22:59:04.946825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.852 [2024-10-11 22:59:04.946917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.852 [2024-10-11 22:59:04.946942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.852 [2024-10-11 22:59:04.946957] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.852 [2024-10-11 22:59:04.946969] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.852 [2024-10-11 22:59:04.946997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.852 qpair failed and we were unable to recover it.
00:36:01.852 [2024-10-11 22:59:04.956828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.852 [2024-10-11 22:59:04.956914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.852 [2024-10-11 22:59:04.956939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.852 [2024-10-11 22:59:04.956954] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.852 [2024-10-11 22:59:04.956967] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.852 [2024-10-11 22:59:04.956995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.852 qpair failed and we were unable to recover it.
00:36:01.852 [2024-10-11 22:59:04.966839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.852 [2024-10-11 22:59:04.966924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.852 [2024-10-11 22:59:04.966949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.852 [2024-10-11 22:59:04.966963] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.852 [2024-10-11 22:59:04.966981] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.852 [2024-10-11 22:59:04.967011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.852 qpair failed and we were unable to recover it.
00:36:01.852 [2024-10-11 22:59:04.976902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.852 [2024-10-11 22:59:04.976991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.852 [2024-10-11 22:59:04.977019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.852 [2024-10-11 22:59:04.977036] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.852 [2024-10-11 22:59:04.977050] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.852 [2024-10-11 22:59:04.977079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.852 qpair failed and we were unable to recover it.
00:36:01.852 [2024-10-11 22:59:04.986903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.852 [2024-10-11 22:59:04.987001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.852 [2024-10-11 22:59:04.987026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.852 [2024-10-11 22:59:04.987041] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.852 [2024-10-11 22:59:04.987054] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.852 [2024-10-11 22:59:04.987082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.852 qpair failed and we were unable to recover it.
00:36:01.852 [2024-10-11 22:59:04.996943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.852 [2024-10-11 22:59:04.997030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.852 [2024-10-11 22:59:04.997055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.852 [2024-10-11 22:59:04.997069] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.852 [2024-10-11 22:59:04.997081] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.852 [2024-10-11 22:59:04.997110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.852 qpair failed and we were unable to recover it.
00:36:01.852 [2024-10-11 22:59:05.006947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.852 [2024-10-11 22:59:05.007028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.852 [2024-10-11 22:59:05.007053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.852 [2024-10-11 22:59:05.007068] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.852 [2024-10-11 22:59:05.007081] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.852 [2024-10-11 22:59:05.007109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.852 qpair failed and we were unable to recover it.
00:36:01.852 [2024-10-11 22:59:05.016995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.852 [2024-10-11 22:59:05.017091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.852 [2024-10-11 22:59:05.017117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.852 [2024-10-11 22:59:05.017132] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.852 [2024-10-11 22:59:05.017144] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.852 [2024-10-11 22:59:05.017175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.852 qpair failed and we were unable to recover it.
00:36:01.852 [2024-10-11 22:59:05.027006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.852 [2024-10-11 22:59:05.027091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.852 [2024-10-11 22:59:05.027116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.852 [2024-10-11 22:59:05.027130] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.852 [2024-10-11 22:59:05.027143] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.852 [2024-10-11 22:59:05.027171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.852 qpair failed and we were unable to recover it.
00:36:01.852 [2024-10-11 22:59:05.037160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.852 [2024-10-11 22:59:05.037283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.852 [2024-10-11 22:59:05.037308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.853 [2024-10-11 22:59:05.037323] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.853 [2024-10-11 22:59:05.037336] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.853 [2024-10-11 22:59:05.037365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.853 qpair failed and we were unable to recover it.
00:36:01.853 [2024-10-11 22:59:05.047062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.853 [2024-10-11 22:59:05.047143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.853 [2024-10-11 22:59:05.047167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.853 [2024-10-11 22:59:05.047182] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.853 [2024-10-11 22:59:05.047195] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.853 [2024-10-11 22:59:05.047223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.853 qpair failed and we were unable to recover it.
00:36:01.853 [2024-10-11 22:59:05.057114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.853 [2024-10-11 22:59:05.057200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.853 [2024-10-11 22:59:05.057224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.853 [2024-10-11 22:59:05.057238] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.853 [2024-10-11 22:59:05.057255] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.853 [2024-10-11 22:59:05.057284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.853 qpair failed and we were unable to recover it.
00:36:01.853 [2024-10-11 22:59:05.067255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.853 [2024-10-11 22:59:05.067396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.853 [2024-10-11 22:59:05.067423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.853 [2024-10-11 22:59:05.067439] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.853 [2024-10-11 22:59:05.067452] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.853 [2024-10-11 22:59:05.067480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.853 qpair failed and we were unable to recover it.
00:36:01.853 [2024-10-11 22:59:05.077176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.853 [2024-10-11 22:59:05.077265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.853 [2024-10-11 22:59:05.077293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.853 [2024-10-11 22:59:05.077310] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.853 [2024-10-11 22:59:05.077323] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.853 [2024-10-11 22:59:05.077353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.853 qpair failed and we were unable to recover it.
00:36:01.853 [2024-10-11 22:59:05.087193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.853 [2024-10-11 22:59:05.087301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.853 [2024-10-11 22:59:05.087327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.853 [2024-10-11 22:59:05.087342] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.853 [2024-10-11 22:59:05.087354] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.853 [2024-10-11 22:59:05.087383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.853 qpair failed and we were unable to recover it.
00:36:01.853 [2024-10-11 22:59:05.097228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.853 [2024-10-11 22:59:05.097352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.853 [2024-10-11 22:59:05.097378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.853 [2024-10-11 22:59:05.097393] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.853 [2024-10-11 22:59:05.097405] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340
00:36:01.853 [2024-10-11 22:59:05.097435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:01.853 qpair failed and we were unable to recover it.
00:36:01.853 [2024-10-11 22:59:05.107290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.853 [2024-10-11 22:59:05.107403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.853 [2024-10-11 22:59:05.107428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.853 [2024-10-11 22:59:05.107443] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.853 [2024-10-11 22:59:05.107455] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.853 [2024-10-11 22:59:05.107483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.853 qpair failed and we were unable to recover it. 
00:36:01.853 [2024-10-11 22:59:05.117320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.853 [2024-10-11 22:59:05.117404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.853 [2024-10-11 22:59:05.117428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.853 [2024-10-11 22:59:05.117443] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.853 [2024-10-11 22:59:05.117456] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:01.853 [2024-10-11 22:59:05.117484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.853 qpair failed and we were unable to recover it. 
00:36:02.112 [2024-10-11 22:59:05.127325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.112 [2024-10-11 22:59:05.127409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.112 [2024-10-11 22:59:05.127433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.112 [2024-10-11 22:59:05.127447] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.112 [2024-10-11 22:59:05.127460] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.112 [2024-10-11 22:59:05.127488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.112 qpair failed and we were unable to recover it. 
00:36:02.112 [2024-10-11 22:59:05.137398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.112 [2024-10-11 22:59:05.137490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.112 [2024-10-11 22:59:05.137514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.112 [2024-10-11 22:59:05.137528] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.112 [2024-10-11 22:59:05.137540] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.112 [2024-10-11 22:59:05.137579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.112 qpair failed and we were unable to recover it. 
00:36:02.112 [2024-10-11 22:59:05.147368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.112 [2024-10-11 22:59:05.147454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.112 [2024-10-11 22:59:05.147480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.112 [2024-10-11 22:59:05.147499] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.112 [2024-10-11 22:59:05.147512] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.112 [2024-10-11 22:59:05.147540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.112 qpair failed and we were unable to recover it. 
00:36:02.112 [2024-10-11 22:59:05.157514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.112 [2024-10-11 22:59:05.157608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.112 [2024-10-11 22:59:05.157633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.112 [2024-10-11 22:59:05.157647] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.112 [2024-10-11 22:59:05.157660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.112 [2024-10-11 22:59:05.157688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.112 qpair failed and we were unable to recover it. 
00:36:02.112 [2024-10-11 22:59:05.167517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.112 [2024-10-11 22:59:05.167640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.112 [2024-10-11 22:59:05.167667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.112 [2024-10-11 22:59:05.167681] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.112 [2024-10-11 22:59:05.167694] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.112 [2024-10-11 22:59:05.167723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.112 qpair failed and we were unable to recover it. 
00:36:02.112 [2024-10-11 22:59:05.177482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.112 [2024-10-11 22:59:05.177591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.112 [2024-10-11 22:59:05.177616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.112 [2024-10-11 22:59:05.177631] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.112 [2024-10-11 22:59:05.177643] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.112 [2024-10-11 22:59:05.177671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.112 qpair failed and we were unable to recover it. 
00:36:02.112 [2024-10-11 22:59:05.187486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.112 [2024-10-11 22:59:05.187602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.112 [2024-10-11 22:59:05.187629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.112 [2024-10-11 22:59:05.187643] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.112 [2024-10-11 22:59:05.187655] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.112 [2024-10-11 22:59:05.187683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.112 qpair failed and we were unable to recover it. 
00:36:02.112 [2024-10-11 22:59:05.197545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.112 [2024-10-11 22:59:05.197647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.112 [2024-10-11 22:59:05.197672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.112 [2024-10-11 22:59:05.197686] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.112 [2024-10-11 22:59:05.197699] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.112 [2024-10-11 22:59:05.197728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.112 qpair failed and we were unable to recover it. 
00:36:02.112 [2024-10-11 22:59:05.207545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.112 [2024-10-11 22:59:05.207639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.112 [2024-10-11 22:59:05.207664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.112 [2024-10-11 22:59:05.207678] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.112 [2024-10-11 22:59:05.207690] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.112 [2024-10-11 22:59:05.207720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.112 qpair failed and we were unable to recover it. 
00:36:02.112 [2024-10-11 22:59:05.217573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.112 [2024-10-11 22:59:05.217665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.112 [2024-10-11 22:59:05.217689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.112 [2024-10-11 22:59:05.217703] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.112 [2024-10-11 22:59:05.217716] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.112 [2024-10-11 22:59:05.217744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.112 qpair failed and we were unable to recover it. 
00:36:02.112 [2024-10-11 22:59:05.227623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.113 [2024-10-11 22:59:05.227711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.113 [2024-10-11 22:59:05.227735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.113 [2024-10-11 22:59:05.227749] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.113 [2024-10-11 22:59:05.227762] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.113 [2024-10-11 22:59:05.227791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.113 qpair failed and we were unable to recover it. 
00:36:02.113 [2024-10-11 22:59:05.237605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.113 [2024-10-11 22:59:05.237692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.113 [2024-10-11 22:59:05.237717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.113 [2024-10-11 22:59:05.237740] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.113 [2024-10-11 22:59:05.237753] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.113 [2024-10-11 22:59:05.237782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.113 qpair failed and we were unable to recover it. 
00:36:02.113 [2024-10-11 22:59:05.247639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.113 [2024-10-11 22:59:05.247724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.113 [2024-10-11 22:59:05.247749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.113 [2024-10-11 22:59:05.247763] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.113 [2024-10-11 22:59:05.247775] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.113 [2024-10-11 22:59:05.247804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.113 qpair failed and we were unable to recover it. 
00:36:02.113 [2024-10-11 22:59:05.257689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.113 [2024-10-11 22:59:05.257779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.113 [2024-10-11 22:59:05.257804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.113 [2024-10-11 22:59:05.257818] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.113 [2024-10-11 22:59:05.257831] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.113 [2024-10-11 22:59:05.257860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.113 qpair failed and we were unable to recover it. 
00:36:02.113 [2024-10-11 22:59:05.267708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.113 [2024-10-11 22:59:05.267787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.113 [2024-10-11 22:59:05.267812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.113 [2024-10-11 22:59:05.267826] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.113 [2024-10-11 22:59:05.267839] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.113 [2024-10-11 22:59:05.267867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.113 qpair failed and we were unable to recover it. 
00:36:02.113 [2024-10-11 22:59:05.277742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.113 [2024-10-11 22:59:05.277849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.113 [2024-10-11 22:59:05.277873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.113 [2024-10-11 22:59:05.277888] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.113 [2024-10-11 22:59:05.277902] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.113 [2024-10-11 22:59:05.277931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.113 qpair failed and we were unable to recover it. 
00:36:02.113 [2024-10-11 22:59:05.287845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.113 [2024-10-11 22:59:05.287950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.113 [2024-10-11 22:59:05.287975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.113 [2024-10-11 22:59:05.287989] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.113 [2024-10-11 22:59:05.288001] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.113 [2024-10-11 22:59:05.288030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.113 qpair failed and we were unable to recover it. 
00:36:02.113 [2024-10-11 22:59:05.297824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.113 [2024-10-11 22:59:05.297960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.113 [2024-10-11 22:59:05.297985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.113 [2024-10-11 22:59:05.297999] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.113 [2024-10-11 22:59:05.298011] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.113 [2024-10-11 22:59:05.298039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.113 qpair failed and we were unable to recover it. 
00:36:02.113 [2024-10-11 22:59:05.307873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.113 [2024-10-11 22:59:05.307974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.113 [2024-10-11 22:59:05.307999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.113 [2024-10-11 22:59:05.308012] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.113 [2024-10-11 22:59:05.308025] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.113 [2024-10-11 22:59:05.308053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.113 qpair failed and we were unable to recover it. 
00:36:02.113 [2024-10-11 22:59:05.317933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.113 [2024-10-11 22:59:05.318017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.113 [2024-10-11 22:59:05.318042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.113 [2024-10-11 22:59:05.318056] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.113 [2024-10-11 22:59:05.318068] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.113 [2024-10-11 22:59:05.318097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.113 qpair failed and we were unable to recover it. 
00:36:02.113 [2024-10-11 22:59:05.327902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.113 [2024-10-11 22:59:05.328025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.113 [2024-10-11 22:59:05.328049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.113 [2024-10-11 22:59:05.328080] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.113 [2024-10-11 22:59:05.328094] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.113 [2024-10-11 22:59:05.328123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.113 qpair failed and we were unable to recover it. 
00:36:02.113 [2024-10-11 22:59:05.337915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.113 [2024-10-11 22:59:05.338010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.113 [2024-10-11 22:59:05.338036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.113 [2024-10-11 22:59:05.338050] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.113 [2024-10-11 22:59:05.338063] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.113 [2024-10-11 22:59:05.338092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.113 qpair failed and we were unable to recover it. 
00:36:02.113 [2024-10-11 22:59:05.347926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.113 [2024-10-11 22:59:05.348013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.113 [2024-10-11 22:59:05.348037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.113 [2024-10-11 22:59:05.348051] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.113 [2024-10-11 22:59:05.348064] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.113 [2024-10-11 22:59:05.348092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.113 qpair failed and we were unable to recover it. 
00:36:02.113 [2024-10-11 22:59:05.357941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.113 [2024-10-11 22:59:05.358033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.113 [2024-10-11 22:59:05.358057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.113 [2024-10-11 22:59:05.358071] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.113 [2024-10-11 22:59:05.358084] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.113 [2024-10-11 22:59:05.358113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.113 qpair failed and we were unable to recover it. 
00:36:02.113 [2024-10-11 22:59:05.367962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.113 [2024-10-11 22:59:05.368047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.113 [2024-10-11 22:59:05.368071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.113 [2024-10-11 22:59:05.368085] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.114 [2024-10-11 22:59:05.368097] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.114 [2024-10-11 22:59:05.368126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.114 qpair failed and we were unable to recover it. 
00:36:02.114 [2024-10-11 22:59:05.378136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.114 [2024-10-11 22:59:05.378267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.114 [2024-10-11 22:59:05.378293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.114 [2024-10-11 22:59:05.378308] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.114 [2024-10-11 22:59:05.378320] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.114 [2024-10-11 22:59:05.378348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.114 qpair failed and we were unable to recover it. 
00:36:02.372 [2024-10-11 22:59:05.388076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.372 [2024-10-11 22:59:05.388171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.372 [2024-10-11 22:59:05.388197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.372 [2024-10-11 22:59:05.388212] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.372 [2024-10-11 22:59:05.388224] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.372 [2024-10-11 22:59:05.388254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.372 qpair failed and we were unable to recover it. 
00:36:02.372 [2024-10-11 22:59:05.398100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.372 [2024-10-11 22:59:05.398218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.372 [2024-10-11 22:59:05.398245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.372 [2024-10-11 22:59:05.398260] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.372 [2024-10-11 22:59:05.398273] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.372 [2024-10-11 22:59:05.398302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.372 qpair failed and we were unable to recover it. 
00:36:02.372 [2024-10-11 22:59:05.408183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.372 [2024-10-11 22:59:05.408274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.372 [2024-10-11 22:59:05.408299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.372 [2024-10-11 22:59:05.408313] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.372 [2024-10-11 22:59:05.408326] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.372 [2024-10-11 22:59:05.408355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.372 qpair failed and we were unable to recover it. 
00:36:02.372 [2024-10-11 22:59:05.418197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.372 [2024-10-11 22:59:05.418297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.372 [2024-10-11 22:59:05.418322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.372 [2024-10-11 22:59:05.418342] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.372 [2024-10-11 22:59:05.418356] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.372 [2024-10-11 22:59:05.418385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.372 qpair failed and we were unable to recover it. 
00:36:02.372 [2024-10-11 22:59:05.428159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.372 [2024-10-11 22:59:05.428252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.372 [2024-10-11 22:59:05.428276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.372 [2024-10-11 22:59:05.428290] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.372 [2024-10-11 22:59:05.428303] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.372 [2024-10-11 22:59:05.428332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.372 qpair failed and we were unable to recover it. 
00:36:02.372 [2024-10-11 22:59:05.438190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.372 [2024-10-11 22:59:05.438285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.372 [2024-10-11 22:59:05.438309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.372 [2024-10-11 22:59:05.438324] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.372 [2024-10-11 22:59:05.438336] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.372 [2024-10-11 22:59:05.438365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.372 qpair failed and we were unable to recover it. 
00:36:02.372 [2024-10-11 22:59:05.448172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.372 [2024-10-11 22:59:05.448270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.372 [2024-10-11 22:59:05.448296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.372 [2024-10-11 22:59:05.448311] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.372 [2024-10-11 22:59:05.448323] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.372 [2024-10-11 22:59:05.448352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.372 qpair failed and we were unable to recover it. 
00:36:02.373 [2024-10-11 22:59:05.458222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.373 [2024-10-11 22:59:05.458315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.373 [2024-10-11 22:59:05.458339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.373 [2024-10-11 22:59:05.458353] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.373 [2024-10-11 22:59:05.458366] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.373 [2024-10-11 22:59:05.458394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.373 qpair failed and we were unable to recover it. 
00:36:02.373 [2024-10-11 22:59:05.468241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.373 [2024-10-11 22:59:05.468348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.373 [2024-10-11 22:59:05.468374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.373 [2024-10-11 22:59:05.468389] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.373 [2024-10-11 22:59:05.468401] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.373 [2024-10-11 22:59:05.468431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.373 qpair failed and we were unable to recover it. 
00:36:02.373 [2024-10-11 22:59:05.478271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.373 [2024-10-11 22:59:05.478359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.373 [2024-10-11 22:59:05.478384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.373 [2024-10-11 22:59:05.478399] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.373 [2024-10-11 22:59:05.478412] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.373 [2024-10-11 22:59:05.478440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.373 qpair failed and we were unable to recover it. 
00:36:02.373 [2024-10-11 22:59:05.488296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.373 [2024-10-11 22:59:05.488378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.373 [2024-10-11 22:59:05.488403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.373 [2024-10-11 22:59:05.488417] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.373 [2024-10-11 22:59:05.488430] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.373 [2024-10-11 22:59:05.488458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.373 qpair failed and we were unable to recover it. 
00:36:02.373 [2024-10-11 22:59:05.498327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.373 [2024-10-11 22:59:05.498420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.373 [2024-10-11 22:59:05.498444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.373 [2024-10-11 22:59:05.498458] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.373 [2024-10-11 22:59:05.498470] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.373 [2024-10-11 22:59:05.498498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.373 qpair failed and we were unable to recover it. 
00:36:02.373 [2024-10-11 22:59:05.508378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.373 [2024-10-11 22:59:05.508489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.373 [2024-10-11 22:59:05.508521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.373 [2024-10-11 22:59:05.508541] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.373 [2024-10-11 22:59:05.508563] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x222c340 00:36:02.373 [2024-10-11 22:59:05.508594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.373 qpair failed and we were unable to recover it. 
00:36:02.373 [2024-10-11 22:59:05.518388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.373 [2024-10-11 22:59:05.518475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.373 [2024-10-11 22:59:05.518507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.373 [2024-10-11 22:59:05.518522] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.373 [2024-10-11 22:59:05.518544] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff3d8000b90 00:36:02.373 [2024-10-11 22:59:05.518584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:02.373 qpair failed and we were unable to recover it. 
00:36:02.373 [2024-10-11 22:59:05.528424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.373 [2024-10-11 22:59:05.528517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.373 [2024-10-11 22:59:05.528547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.373 [2024-10-11 22:59:05.528574] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.373 [2024-10-11 22:59:05.528588] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff3d8000b90 00:36:02.373 [2024-10-11 22:59:05.528617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:02.373 qpair failed and we were unable to recover it. 00:36:02.373 [2024-10-11 22:59:05.528716] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:36:02.373 A controller has encountered a failure and is being reset. 00:36:02.373 Controller properly reset. 00:36:02.373 Initializing NVMe Controllers 00:36:02.373 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:02.373 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:02.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:02.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:02.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:02.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:02.373 Initialization complete. Launching workers. 
00:36:02.373 Starting thread on core 1 00:36:02.373 Starting thread on core 2 00:36:02.373 Starting thread on core 3 00:36:02.373 Starting thread on core 0 00:36:02.373 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:02.373 00:36:02.373 real 0m10.739s 00:36:02.373 user 0m19.308s 00:36:02.373 sys 0m5.314s 00:36:02.373 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:02.373 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:02.373 ************************************ 00:36:02.373 END TEST nvmf_target_disconnect_tc2 00:36:02.373 ************************************ 00:36:02.373 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:02.373 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:02.373 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:02.373 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:02.373 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:36:02.373 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:02.373 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:36:02.373 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:02.373 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:02.373 rmmod nvme_tcp 00:36:02.631 rmmod nvme_fabrics 00:36:02.631 rmmod nvme_keyring 00:36:02.631 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:36:02.631 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:36:02.631 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:36:02.631 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 401116 ']' 00:36:02.631 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 401116 00:36:02.631 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 401116 ']' 00:36:02.631 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 401116 00:36:02.631 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:36:02.631 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:02.631 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 401116 00:36:02.631 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:36:02.631 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:36:02.631 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 401116' 00:36:02.631 killing process with pid 401116 00:36:02.631 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 401116 00:36:02.631 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 401116 00:36:02.891 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:02.891 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:02.891 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:02.891 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:36:02.891 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:36:02.891 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:36:02.891 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:02.891 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:02.891 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:02.891 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:02.891 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:02.891 22:59:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:04.795 22:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:04.795 00:36:04.795 real 0m15.850s 00:36:04.795 user 0m45.566s 00:36:04.795 sys 0m7.458s 00:36:04.795 22:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:04.795 22:59:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:04.795 ************************************ 00:36:04.795 END TEST nvmf_target_disconnect 00:36:04.795 ************************************ 00:36:04.795 22:59:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:04.795 00:36:04.795 real 6m43.365s 00:36:04.795 user 17m17.858s 00:36:04.795 sys 1m26.625s 00:36:04.796 22:59:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:04.796 22:59:08 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.796 ************************************ 00:36:04.796 END TEST nvmf_host 00:36:04.796 ************************************ 00:36:04.796 22:59:08 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:36:04.796 22:59:08 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:36:04.796 22:59:08 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:04.796 22:59:08 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:04.796 22:59:08 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:04.796 22:59:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:04.796 ************************************ 00:36:04.796 START TEST nvmf_target_core_interrupt_mode 00:36:04.796 ************************************ 00:36:04.796 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:05.055 * Looking for test storage... 
00:36:05.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:36:05.055 22:59:08 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:05.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.055 --rc 
genhtml_branch_coverage=1 00:36:05.055 --rc genhtml_function_coverage=1 00:36:05.055 --rc genhtml_legend=1 00:36:05.055 --rc geninfo_all_blocks=1 00:36:05.055 --rc geninfo_unexecuted_blocks=1 00:36:05.055 00:36:05.055 ' 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:05.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.055 --rc genhtml_branch_coverage=1 00:36:05.055 --rc genhtml_function_coverage=1 00:36:05.055 --rc genhtml_legend=1 00:36:05.055 --rc geninfo_all_blocks=1 00:36:05.055 --rc geninfo_unexecuted_blocks=1 00:36:05.055 00:36:05.055 ' 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:05.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.055 --rc genhtml_branch_coverage=1 00:36:05.055 --rc genhtml_function_coverage=1 00:36:05.055 --rc genhtml_legend=1 00:36:05.055 --rc geninfo_all_blocks=1 00:36:05.055 --rc geninfo_unexecuted_blocks=1 00:36:05.055 00:36:05.055 ' 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:05.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.055 --rc genhtml_branch_coverage=1 00:36:05.055 --rc genhtml_function_coverage=1 00:36:05.055 --rc genhtml_legend=1 00:36:05.055 --rc geninfo_all_blocks=1 00:36:05.055 --rc geninfo_unexecuted_blocks=1 00:36:05.055 00:36:05.055 ' 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:05.055 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:05.055 
22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.056 22:59:08 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:05.056 
22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:05.056 ************************************ 00:36:05.056 START TEST nvmf_abort 00:36:05.056 ************************************ 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:05.056 * Looking for test storage... 
00:36:05.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:36:05.056 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:36:05.315 22:59:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:05.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.315 --rc genhtml_branch_coverage=1 00:36:05.315 --rc genhtml_function_coverage=1 00:36:05.315 --rc genhtml_legend=1 00:36:05.315 --rc geninfo_all_blocks=1 00:36:05.315 --rc geninfo_unexecuted_blocks=1 00:36:05.315 00:36:05.315 ' 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:05.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.315 --rc genhtml_branch_coverage=1 00:36:05.315 --rc genhtml_function_coverage=1 00:36:05.315 --rc genhtml_legend=1 00:36:05.315 --rc geninfo_all_blocks=1 00:36:05.315 --rc geninfo_unexecuted_blocks=1 00:36:05.315 00:36:05.315 ' 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:05.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.315 --rc genhtml_branch_coverage=1 00:36:05.315 --rc genhtml_function_coverage=1 00:36:05.315 --rc genhtml_legend=1 00:36:05.315 --rc geninfo_all_blocks=1 00:36:05.315 --rc geninfo_unexecuted_blocks=1 00:36:05.315 00:36:05.315 ' 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:05.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.315 --rc genhtml_branch_coverage=1 00:36:05.315 --rc genhtml_function_coverage=1 00:36:05.315 --rc genhtml_legend=1 00:36:05.315 --rc geninfo_all_blocks=1 00:36:05.315 --rc geninfo_unexecuted_blocks=1 00:36:05.315 00:36:05.315 ' 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:05.315 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:05.316 22:59:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:05.316 22:59:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:36:05.316 22:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:07.848 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:36:07.848 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:36:07.848 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:07.848 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:07.848 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:07.848 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:07.849 22:59:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:07.849 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:07.849 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:07.849 
22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:07.849 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:07.849 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:07.849 22:59:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:07.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:07.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:36:07.849 00:36:07.849 --- 10.0.0.2 ping statistics --- 00:36:07.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:07.849 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:36:07.849 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:07.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:07.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:36:07.849 00:36:07.850 --- 10.0.0.1 ping statistics --- 00:36:07.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:07.850 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:36:07.850 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:07.850 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:36:07.850 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:07.850 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:07.850 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:07.850 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:07.850 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:07.850 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:07.850 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:07.850 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:36:07.850 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:07.850 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:07.850 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:07.850 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # 
nvmfpid=403923 00:36:07.850 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:07.850 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 403923 00:36:07.850 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 403923 ']' 00:36:07.850 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:07.850 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:07.850 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:07.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:07.850 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:07.850 22:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:07.850 [2024-10-11 22:59:10.788058] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:07.850 [2024-10-11 22:59:10.789173] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:36:07.850 [2024-10-11 22:59:10.789229] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:07.850 [2024-10-11 22:59:10.857366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:07.850 [2024-10-11 22:59:10.906115] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:07.850 [2024-10-11 22:59:10.906166] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:07.850 [2024-10-11 22:59:10.906180] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:07.850 [2024-10-11 22:59:10.906191] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:07.850 [2024-10-11 22:59:10.906200] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:07.850 [2024-10-11 22:59:10.907679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:07.850 [2024-10-11 22:59:10.907730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:07.850 [2024-10-11 22:59:10.907734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:07.850 [2024-10-11 22:59:10.995428] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:07.850 [2024-10-11 22:59:10.995681] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:07.850 [2024-10-11 22:59:10.995696] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:36:07.850 [2024-10-11 22:59:10.995962] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:07.850 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:07.850 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:36:07.850 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:07.850 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:07.850 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:07.850 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:07.850 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:36:07.850 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.850 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:07.850 [2024-10-11 22:59:11.052440] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:07.850 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.850 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:36:07.850 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.850 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:36:07.850 Malloc0 00:36:07.850 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.850 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:07.850 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.850 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:07.850 Delay0 00:36:07.850 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.850 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:07.850 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.850 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:07.850 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.850 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:36:07.850 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.850 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:08.108 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.108 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:36:08.108 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.108 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:08.108 [2024-10-11 22:59:11.124626] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:08.108 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.108 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:08.108 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.108 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:08.108 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.108 22:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:36:08.108 [2024-10-11 22:59:11.266639] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:10.638 Initializing NVMe Controllers 00:36:10.638 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:10.638 controller IO queue size 128 less than required 00:36:10.638 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:36:10.638 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:36:10.638 Initialization complete. Launching workers. 
00:36:10.638 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28775 00:36:10.638 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28832, failed to submit 66 00:36:10.638 success 28775, unsuccessful 57, failed 0 00:36:10.638 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:10.638 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:10.638 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:10.638 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:10.638 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:10.638 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:36:10.638 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:10.638 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:36:10.638 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:10.638 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:36:10.639 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:10.639 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:10.639 rmmod nvme_tcp 00:36:10.639 rmmod nvme_fabrics 00:36:10.639 rmmod nvme_keyring 00:36:10.639 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:10.639 22:59:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:36:10.639 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:36:10.639 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 403923 ']' 00:36:10.639 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 403923 00:36:10.639 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 403923 ']' 00:36:10.639 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 403923 00:36:10.639 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:36:10.639 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:10.639 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 403923 00:36:10.639 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:10.639 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:10.639 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 403923' 00:36:10.639 killing process with pid 403923 00:36:10.639 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 403923 00:36:10.639 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 403923 00:36:10.639 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:10.639 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:10.639 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:10.639 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:36:10.639 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:36:10.639 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:10.639 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:36:10.639 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:10.639 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:10.639 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:10.639 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:10.639 22:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:12.546 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:12.546 00:36:12.546 real 0m7.564s 00:36:12.546 user 0m9.682s 00:36:12.546 sys 0m3.121s 00:36:12.546 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:12.546 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:12.546 ************************************ 00:36:12.546 END TEST nvmf_abort 00:36:12.546 ************************************ 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 
-- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:12.805 ************************************ 00:36:12.805 START TEST nvmf_ns_hotplug_stress 00:36:12.805 ************************************ 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:12.805 * Looking for test storage... 00:36:12.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 
ver2_l 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:12.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.805 --rc genhtml_branch_coverage=1 00:36:12.805 --rc genhtml_function_coverage=1 00:36:12.805 --rc genhtml_legend=1 00:36:12.805 --rc geninfo_all_blocks=1 00:36:12.805 --rc geninfo_unexecuted_blocks=1 00:36:12.805 00:36:12.805 ' 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:12.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.805 --rc genhtml_branch_coverage=1 00:36:12.805 --rc genhtml_function_coverage=1 00:36:12.805 --rc genhtml_legend=1 00:36:12.805 --rc geninfo_all_blocks=1 00:36:12.805 --rc geninfo_unexecuted_blocks=1 00:36:12.805 00:36:12.805 ' 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:12.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.805 --rc genhtml_branch_coverage=1 00:36:12.805 --rc genhtml_function_coverage=1 00:36:12.805 --rc genhtml_legend=1 00:36:12.805 --rc geninfo_all_blocks=1 00:36:12.805 --rc geninfo_unexecuted_blocks=1 00:36:12.805 00:36:12.805 ' 00:36:12.805 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:12.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.805 --rc genhtml_branch_coverage=1 00:36:12.805 --rc genhtml_function_coverage=1 00:36:12.805 --rc genhtml_legend=1 00:36:12.806 --rc geninfo_all_blocks=1 00:36:12.806 --rc geninfo_unexecuted_blocks=1 00:36:12.806 00:36:12.806 ' 00:36:12.806 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:12.806 22:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@7 -- # uname -s 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:12.806 22:59:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:12.806 22:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:36:12.806 22:59:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:36:15.337 22:59:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:15.337 
22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:15.337 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:15.337 22:59:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:15.337 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:15.337 22:59:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:15.337 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:15.337 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 
00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:15.337 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:15.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:15.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:36:15.338 00:36:15.338 --- 10.0.0.2 ping statistics --- 00:36:15.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:15.338 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:15.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:15.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:36:15.338 00:36:15.338 --- 10.0.0.1 ping statistics --- 00:36:15.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:15.338 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:15.338 22:59:18 
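
The network plumbing recorded above (nvmf/common.sh's nvmf_tcp_init) can be condensed into the sketch below. The interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addressing, and the TCP/4420 iptables rule are taken from the log; this is an illustration of what the harness does (run as root on a host with those NICs), not a supported entry point.

```shell
# Sketch of the TCP test topology from the log: move the target-side port
# (cvl_0_0) into its own network namespace, address both ends, open
# TCP/4420 on the initiator side, then verify reachability both ways.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                       # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator
```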
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=406261 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 406261 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 406261 ']' 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:15.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:15.338 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:15.338 [2024-10-11 22:59:18.398011] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:15.338 [2024-10-11 22:59:18.399042] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:36:15.338 [2024-10-11 22:59:18.399089] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:15.338 [2024-10-11 22:59:18.461365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:15.338 [2024-10-11 22:59:18.507667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:15.338 [2024-10-11 22:59:18.507722] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:15.338 [2024-10-11 22:59:18.507746] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:15.338 [2024-10-11 22:59:18.507757] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:15.338 [2024-10-11 22:59:18.507766] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:15.338 [2024-10-11 22:59:18.509255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:15.338 [2024-10-11 22:59:18.509313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:15.338 [2024-10-11 22:59:18.509316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:15.338 [2024-10-11 22:59:18.589878] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:15.338 [2024-10-11 22:59:18.590082] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:15.338 [2024-10-11 22:59:18.590087] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:15.338 [2024-10-11 22:59:18.590360] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:15.596 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:15.596 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:36:15.596 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:15.596 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:15.596 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:15.596 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:15.596 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:36:15.596 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:15.854 [2024-10-11 22:59:18.898049] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:15.854 22:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:16.112 22:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:16.370 [2024-10-11 22:59:19.458322] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:16.370 22:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:16.628 22:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:36:16.886 Malloc0 00:36:16.886 22:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:17.144 Delay0 00:36:17.144 22:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:17.402 22:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:36:17.660 NULL1 00:36:17.660 22:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:36:18.225 22:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=406651 00:36:18.225 22:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406651 00:36:18.226 22:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:36:18.226 22:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:19.597 Read completed with error (sct=0, sc=11) 00:36:19.597 22:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:19.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:19.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:19.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
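
The target bring-up the log performs through rpc.py can be replayed as the sequence below. The NQN, serial, listener address, and the 32 MiB / 512 B malloc geometry are taken from the log; rpc_py falls back to `echo` here so the sketch runs as a dry run without a live SPDK target — point it at scripts/rpc.py to drive a real one.

```shell
# Condensed replay of the ns_hotplug_stress bring-up seen in the log:
# create the TCP transport and subsystem, add listeners, then stack
# Malloc0 -> Delay0 and create NULL1, attaching both as namespaces.
rpc_py=${rpc_py:-echo}   # dry-run by default; set to .../scripts/rpc.py for real use
nqn=nqn.2016-06.io.spdk:cnode1

$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
$rpc_py nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc_py bdev_malloc_create 32 512 -b Malloc0
$rpc_py bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc_py nvmf_subsystem_add_ns "$nqn" Delay0
$rpc_py bdev_null_create NULL1 1000 512
$rpc_py nvmf_subsystem_add_ns "$nqn" NULL1
```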
00:36:19.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:19.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:19.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:19.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:19.597 22:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:36:19.597 22:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:36:19.854 true 00:36:19.854 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406651 00:36:19.854 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:20.787 22:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:21.045 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:36:21.045 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:36:21.302 true 00:36:21.302 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406651 00:36:21.302 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:21.559 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:21.817 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:36:21.817 22:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:36:22.075 true 00:36:22.075 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406651 00:36:22.075 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:23.006 22:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:23.006 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:36:23.006 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:36:23.263 true 00:36:23.263 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406651 00:36:23.263 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:23.521 22:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:23.779 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:36:23.779 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:36:24.036 true 00:36:24.036 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406651 00:36:24.036 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:24.294 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:24.551 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:36:24.551 22:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:36:25.117 true 00:36:25.117 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406651 00:36:25.117 22:59:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:26.049 22:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:26.049 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:36:26.049 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:36:26.307 true 00:36:26.307 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406651 00:36:26.307 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:26.564 22:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:26.822 22:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:36:26.822 22:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:36:27.079 true 00:36:27.337 22:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406651 
00:36:27.337 22:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:27.595 22:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:27.852 22:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:36:27.852 22:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:36:28.110 true 00:36:28.110 22:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406651 00:36:28.110 22:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:29.043 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:29.043 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:29.043 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:29.300 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:36:29.300 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1010 00:36:29.558 true 00:36:29.558 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406651 00:36:29.558 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:29.815 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:30.073 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:36:30.073 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:36:30.331 true 00:36:30.331 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406651 00:36:30.331 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:31.264 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:31.264 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:31.264 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:31.264 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:31.521 22:59:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:36:31.521 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:36:31.779 true 00:36:31.779 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406651 00:36:31.779 22:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:32.036 22:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:32.294 22:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:36:32.294 22:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:36:32.551 true 00:36:32.551 22:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406651 00:36:32.551 22:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:33.481 22:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:36:33.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:33.738 22:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:36:33.738 22:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:36:33.996 true 00:36:33.996 22:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406651 00:36:33.996 22:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:34.253 22:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:34.510 22:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:36:34.510 22:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:36:34.768 true 00:36:34.768 22:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406651 00:36:34.768 22:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:35.701 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:35.701 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:36:35.701 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:36:35.958 true 00:36:35.958 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406651 00:36:35.958 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:36.216 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:36.472 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:36:36.472 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:36:36.729 true 00:36:36.729 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406651 00:36:36.729 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:37.294 22:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:37.294 22:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:36:37.294 22:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:36:37.551 true 00:36:37.809 22:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406651 00:36:37.809 22:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:38.740 22:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:38.996 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:36:38.996 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:36:39.252 true 00:36:39.252 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406651 00:36:39.252 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:39.509 22:59:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:39.766 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:36:39.766 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:36:40.023 true 00:36:40.023 22:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406651 00:36:40.023 22:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:40.279 22:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:40.537 22:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:36:40.537 22:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:36:40.795 true 00:36:40.795 22:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406651 00:36:40.795 22:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:36:41.727 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:41.984 22:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:36:41.984 22:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:36:42.242 true 00:36:42.242 22:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406651 00:36:42.242 22:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:42.499 22:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:42.757 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:36:42.757 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:36:43.014 true 00:36:43.271 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406651 00:36:43.271 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:36:43.529 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:43.786 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:36:43.786 22:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:36:44.043 true 00:36:44.043 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406651 00:36:44.043 22:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:44.975 22:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:44.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:45.233 22:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:36:45.233 22:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:36:45.490 true 00:36:45.490 22:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406651 00:36:45.490 22:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:45.748 22:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:46.005 22:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:36:46.005 22:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:36:46.262 true 00:36:46.262 22:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406651 00:36:46.262 22:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:46.520 22:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:46.779 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:36:46.779 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:36:47.036 true 00:36:47.037 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406651 00:36:47.037 22:59:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:47.968 22:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:48.226 22:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:36:48.226 22:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:36:48.484 Initializing NVMe Controllers 00:36:48.484 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:48.484 Controller IO queue size 128, less than required. 00:36:48.484 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:48.484 Controller IO queue size 128, less than required. 00:36:48.484 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:48.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:48.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:36:48.484 Initialization complete. Launching workers. 
00:36:48.484 ======================================================== 00:36:48.484 Latency(us) 00:36:48.484 Device Information : IOPS MiB/s Average min max 00:36:48.484 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 527.83 0.26 107419.63 3354.86 1085440.95 00:36:48.484 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9272.25 4.53 13806.04 1584.48 366158.76 00:36:48.484 ======================================================== 00:36:48.484 Total : 9800.08 4.79 18848.07 1584.48 1085440.95 00:36:48.484 00:36:48.484 true 00:36:48.741 22:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406651 00:36:48.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (406651) - No such process 00:36:48.741 22:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 406651 00:36:48.741 22:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:48.999 22:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:49.257 22:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:36:49.257 22:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:36:49.257 22:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:36:49.257 22:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
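The trace up to this point is one phase of `ns_hotplug_stress.sh`: a loop that repeatedly removes namespace 1 from `nqn.2016-06.io.spdk:cnode1`, re-adds the `Delay0` bdev, and grows the `NULL1` null bdev by one block each pass (the `@45`/`@46`/`@49`/`@50` markers in the trace are the script's line numbers). A minimal sketch of that loop, reconstructed from the trace — the `rpc` function here is a stub standing in for the real `scripts/rpc.py` call, and the fixed iteration count is an assumption for illustration:

```shell
#!/usr/bin/env bash
# Sketch of the hotplug loop visible in the trace above.
# "rpc" is a stub for scripts/rpc.py so the sketch runs anywhere;
# in the real test each call goes to the SPDK JSON-RPC server.
rpc() { echo "rpc $*" >/dev/null; }

NQN=nqn.2016-06.io.spdk:cnode1
null_size=1000
for _ in 1 2 3 4 5; do
    rpc nvmf_subsystem_remove_ns "$NQN" 1      # ns_hotplug_stress.sh@45
    rpc nvmf_subsystem_add_ns "$NQN" Delay0    # ns_hotplug_stress.sh@46
    null_size=$((null_size + 1))               # @49: bump the target size
    rpc bdev_null_resize NULL1 "$null_size"    # @50: grow the null bdev
done
echo "final null_size=$null_size"
```

Each pass also checks `kill -0` on the background fio process (PID 406651 in this run); the phase ends when that process exits, which is the "No such process" line and the latency summary seen above.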
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:49.257 22:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:36:49.515 null0 00:36:49.515 22:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:49.515 22:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:49.515 22:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:36:49.772 null1 00:36:49.772 22:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:49.773 22:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:49.773 22:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:36:50.031 null2 00:36:50.031 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:50.031 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:50.031 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:36:50.288 null3 00:36:50.288 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 
00:36:50.289 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:50.289 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:36:50.546 null4 00:36:50.546 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:50.546 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:50.546 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:36:50.804 null5 00:36:50.804 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:50.804 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:50.804 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:36:51.063 null6 00:36:51.063 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:51.063 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:51.063 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:36:51.321 null7 00:36:51.321 22:59:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:36:51.322 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:36:51.322 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:36:51.322 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:36:51.322 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:36:51.322 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:36:51.322 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:36:51.322 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:51.322 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:36:51.322 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:36:51.322 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:36:51.322 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:36:51.322 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:36:51.322 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:36:51.322 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:36:51.322 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:51.322 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:36:51.322 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:36:51.321 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:36:51.322 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 410553 410554 410556 410558 410560 410562 410564 410566
00:36:51.322 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:51.322 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:36:51.580 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:36:51.580 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:36:51.580 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:36:51.580 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:36:51.580 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:51.580 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:36:51.580 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:36:51.580 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:36:51.838 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:51.838 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:51.838 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:36:51.838 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:51.838 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:51.838 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:36:51.838 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:51.838 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:51.838 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:36:51.838 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:51.838 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:51.838 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:36:51.838 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:51.838 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:51.838 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:36:51.838 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:51.838 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:51.838 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:51.838 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:36:51.838 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:51.838 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:36:51.838 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:51.838 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:51.838 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:36:52.096 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:36:52.096 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:36:52.096 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:36:52.096 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:36:52.354 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:52.354 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:36:52.354 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:36:52.354 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:36:52.612 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:52.612 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:52.612 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:36:52.612 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:52.612 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:52.612 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:36:52.612 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:52.612 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:52.612 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:36:52.612 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:52.613 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:52.613 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:36:52.613 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:52.613 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:52.613 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:36:52.613 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:52.613 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:52.613 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:36:52.613 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:52.613 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:52.613 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:36:52.613 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:52.613 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:52.613 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:36:52.876 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:36:52.876 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:36:52.876 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:36:52.876 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:36:52.876 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:52.876 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:36:52.876 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:36:52.876 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:36:53.136 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.136 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.136 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:36:53.136 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.136 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.136 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:36:53.136 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.136 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.136 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:36:53.136 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.136 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.136 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:36:53.136 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.136 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.136 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:36:53.136 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.136 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.136 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:36:53.136 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.136 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.136 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:36:53.136 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.136 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.136 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:36:53.394 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:36:53.394 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:36:53.394 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:36:53.394 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:36:53.394 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:36:53.394 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:53.394 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:36:53.394 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:36:53.652 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.652 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.652 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:36:53.652 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.652 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.652 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:36:53.652 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.652 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.652 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:36:53.652 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.652 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.652 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:36:53.652 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.652 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.652 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:36:53.652 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.652 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.652 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:36:53.653 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.653 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.653 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:36:53.653 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.653 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.653 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:36:53.911 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:36:53.911 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:36:53.911 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:36:53.911 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:53.911 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:36:53.911 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:36:53.911 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:36:53.911 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:36:54.169 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:54.169 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:54.169 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:36:54.169 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:54.169 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:54.169 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:36:54.427 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:54.427 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:54.427 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:36:54.427 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:54.427 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:54.427 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:36:54.427 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:54.427 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:54.427 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:36:54.427 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:54.427 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:54.427 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:36:54.427 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:54.427 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:54.427 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:36:54.427 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:54.427 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:54.427 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:36:54.686 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:36:54.686 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:36:54.686 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:36:54.686 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:54.686 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:36:54.686 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:36:54.686 22:59:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:54.686 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:54.944 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.944 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.944 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:54.944 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.944 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.944 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:54.944 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.944 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.944 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
5 nqn.2016-06.io.spdk:cnode1 null4 00:36:54.944 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.944 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.944 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:54.944 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.944 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.944 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:54.944 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.944 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.944 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:54.944 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.944 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.944 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:36:54.944 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.944 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:54.944 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:55.202 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:55.202 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:55.202 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:55.202 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:55.202 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:55.202 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:55.202 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:55.202 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:55.460 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.460 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.460 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:55.460 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.460 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.460 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:55.460 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.460 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.460 22:59:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:55.460 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.460 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.460 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:55.460 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.460 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.460 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:55.460 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.460 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.460 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:55.460 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.460 22:59:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.460 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:55.460 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.460 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.460 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:55.718 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:55.718 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:55.718 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:55.718 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:55.718 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:55.718 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:55.718 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:55.718 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:55.976 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.976 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.976 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:55.976 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.976 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.976 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:55.976 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.976 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.976 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:55.977 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.977 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.977 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:55.977 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.977 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.977 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:55.977 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.977 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.977 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 
null7 00:36:56.235 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.235 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.235 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:56.235 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.235 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.235 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:56.493 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:56.493 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:56.493 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:56.493 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:36:56.493 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:56.493 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:56.493 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:56.493 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:56.751 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.751 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.751 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:56.751 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.751 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.751 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:56.751 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.751 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.751 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:56.751 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.751 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.751 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.751 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:56.751 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.751 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.751 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.751 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:56.752 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:56.752 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.752 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.752 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:56.752 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.752 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.752 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:57.009 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:57.010 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:57.010 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:57.010 23:00:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:57.010 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:57.010 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:57.010 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:57.010 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:57.284 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.284 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.284 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.284 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.284 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.284 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:36:57.284 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.284 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.284 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.284 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.284 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.284 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.284 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.284 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.284 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.284 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.285 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:36:57.285 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:36:57.285 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:57.285 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:36:57.285 23:00:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:57.285 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:36:57.285 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:57.285 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:57.285 rmmod nvme_tcp 00:36:57.285 rmmod nvme_fabrics 00:36:57.285 rmmod nvme_keyring 00:36:57.285 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:57.285 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:36:57.285 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:36:57.285 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 406261 ']' 00:36:57.285 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 406261 00:36:57.285 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 406261 ']' 00:36:57.285 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 406261 00:36:57.285 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:36:57.285 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:57.551 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 406261 00:36:57.551 23:00:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:57.551 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:57.551 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 406261' 00:36:57.551 killing process with pid 406261 00:36:57.551 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 406261 00:36:57.551 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 406261 00:36:57.551 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:57.551 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:57.551 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:57.551 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:36:57.551 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:36:57.551 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:57.551 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:36:57.551 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:57.551 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:57.551 23:00:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:57.551 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:57.551 23:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:00.114 23:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:00.114 00:37:00.114 real 0m46.982s 00:37:00.114 user 3m16.468s 00:37:00.114 sys 0m21.972s 00:37:00.114 23:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:00.114 23:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:00.114 ************************************ 00:37:00.114 END TEST nvmf_ns_hotplug_stress 00:37:00.114 ************************************ 00:37:00.114 23:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:00.114 23:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:00.114 23:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:00.114 23:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:00.114 ************************************ 00:37:00.114 START TEST nvmf_delete_subsystem 00:37:00.114 ************************************ 00:37:00.115 23:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:00.115 * Looking for test storage... 00:37:00.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:00.115 23:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:00.115 23:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:37:00.115 23:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:37:00.115 
23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:37:00.115 23:00:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:00.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.115 --rc genhtml_branch_coverage=1 00:37:00.115 --rc genhtml_function_coverage=1 00:37:00.115 --rc genhtml_legend=1 00:37:00.115 --rc geninfo_all_blocks=1 00:37:00.115 --rc geninfo_unexecuted_blocks=1 00:37:00.115 00:37:00.115 ' 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:00.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.115 --rc genhtml_branch_coverage=1 00:37:00.115 --rc genhtml_function_coverage=1 00:37:00.115 --rc genhtml_legend=1 00:37:00.115 --rc geninfo_all_blocks=1 00:37:00.115 --rc geninfo_unexecuted_blocks=1 00:37:00.115 00:37:00.115 ' 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:00.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.115 --rc genhtml_branch_coverage=1 00:37:00.115 --rc genhtml_function_coverage=1 00:37:00.115 --rc genhtml_legend=1 00:37:00.115 --rc geninfo_all_blocks=1 00:37:00.115 --rc 
geninfo_unexecuted_blocks=1 00:37:00.115 00:37:00.115 ' 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:00.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.115 --rc genhtml_branch_coverage=1 00:37:00.115 --rc genhtml_function_coverage=1 00:37:00.115 --rc genhtml_legend=1 00:37:00.115 --rc geninfo_all_blocks=1 00:37:00.115 --rc geninfo_unexecuted_blocks=1 00:37:00.115 00:37:00.115 ' 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:00.115 
23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:00.115 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:00.116 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:00.116 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:00.116 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:00.116 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:00.116 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:00.116 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:00.116 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:00.116 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:00.116 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:37:00.116 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:00.116 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:00.116 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:00.116 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:00.116 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:00.116 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:00.116 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:00.116 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:00.116 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:00.116 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:00.116 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:37:00.116 23:00:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:02.018 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:37:02.018 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:02.018 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:02.018 23:00:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:02.019 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:02.019 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:02.019 23:00:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:02.019 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:02.278 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:02.278 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:02.278 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:02.278 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:37:02.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:02.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:37:02.278 00:37:02.278 --- 10.0.0.2 ping statistics --- 00:37:02.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:02.278 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:37:02.278 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:02.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:02.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:37:02.278 00:37:02.278 --- 10.0.0.1 ping statistics --- 00:37:02.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:02.278 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:37:02.278 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:02.278 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:37:02.278 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:02.278 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:02.278 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:02.278 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:02.278 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:02.278 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:02.278 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:02.278 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:37:02.278 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:02.278 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:02.278 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:02.278 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=413545 00:37:02.278 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:02.278 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 413545 00:37:02.278 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 413545 ']' 00:37:02.278 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:02.278 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:02.278 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:02.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:02.278 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:02.278 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:02.278 [2024-10-11 23:00:05.389852] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:02.278 [2024-10-11 23:00:05.390944] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:37:02.278 [2024-10-11 23:00:05.391023] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:02.278 [2024-10-11 23:00:05.456227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:02.278 [2024-10-11 23:00:05.499597] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:02.278 [2024-10-11 23:00:05.499658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:02.278 [2024-10-11 23:00:05.499682] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:02.278 [2024-10-11 23:00:05.499692] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:02.278 [2024-10-11 23:00:05.499702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:02.278 [2024-10-11 23:00:05.501097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:02.278 [2024-10-11 23:00:05.501102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:02.537 [2024-10-11 23:00:05.585899] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:37:02.537 [2024-10-11 23:00:05.585938] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:02.537 [2024-10-11 23:00:05.586188] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:02.537 [2024-10-11 23:00:05.645738] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:02.537 [2024-10-11 23:00:05.661938] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:02.537 NULL1 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:02.537 Delay0 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=413570 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:37:02.537 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:02.537 [2024-10-11 23:00:05.735524] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:37:04.432 23:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:04.432 23:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:04.432 23:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 starting I/O failed: -6 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 starting I/O failed: -6 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 starting I/O failed: -6 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 starting I/O failed: -6 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 starting I/O failed: -6 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 starting I/O failed: -6 00:37:04.689 Write completed with error (sct=0, sc=8) 
00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 starting I/O failed: -6 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 starting I/O failed: -6 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 [2024-10-11 23:00:07.814776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c1c00cfe0 is same with the state(6) to be set 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error 
(sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 starting I/O failed: -6 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 starting I/O failed: -6 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 starting I/O failed: -6 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 
00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 starting I/O failed: -6 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 starting I/O failed: -6 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 starting I/O failed: -6 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.689 Write completed with error (sct=0, sc=8) 00:37:04.689 Read completed with error (sct=0, sc=8) 00:37:04.690 Read 
completed with error (sct=0, sc=8) 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 starting I/O failed: -6 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 Write completed with error (sct=0, sc=8) 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 starting I/O failed: -6 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 Write completed with error (sct=0, sc=8) 00:37:04.690 Write completed with error (sct=0, sc=8) 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 starting I/O failed: -6 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 Write completed with error (sct=0, sc=8) 00:37:04.690 Write completed with error (sct=0, sc=8) 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 starting I/O failed: -6 00:37:04.690 Write completed with error (sct=0, sc=8) 
00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 Write completed with error (sct=0, sc=8) 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 starting I/O failed: -6 00:37:04.690 Write completed with error (sct=0, sc=8) 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 Write completed with error (sct=0, sc=8) 00:37:04.690 starting I/O failed: -6 00:37:04.690 Write completed with error (sct=0, sc=8) 00:37:04.690 Write completed with error (sct=0, sc=8) 00:37:04.690 Read completed with error (sct=0, sc=8) 00:37:04.690 Write completed with error (sct=0, sc=8) 00:37:04.690 starting I/O failed: -6 00:37:04.690 starting I/O failed: -6 00:37:04.690 starting I/O failed: -6 00:37:04.690 starting I/O failed: -6 00:37:04.690 starting I/O failed: -6 00:37:05.621 [2024-10-11 23:00:08.790741] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2054d00 is same with the state(6) to be set 00:37:05.621 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 [2024-10-11 23:00:08.808873] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c1c00d310 is same with the state(6) to be set 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 
00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 [2024-10-11 23:00:08.817459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20575c0 is same with the state(6) to be set 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read 
completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 [2024-10-11 23:00:08.817883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x2056ed0 is same with the state(6) to be set 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 00:37:05.622 Write completed with error (sct=0, sc=8) 
00:37:05.622 Read completed with error (sct=0, sc=8) 00:37:05.622 [2024-10-11 23:00:08.818132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20570b0 is same with the state(6) to be set 00:37:05.622 Initializing NVMe Controllers 00:37:05.622 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:05.622 Controller IO queue size 128, less than required. 00:37:05.622 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:05.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:05.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:05.622 Initialization complete. Launching workers. 00:37:05.622 ======================================================== 00:37:05.622 Latency(us) 00:37:05.622 Device Information : IOPS MiB/s Average min max 00:37:05.622 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 192.03 0.09 953228.75 791.83 1011849.96 00:37:05.622 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 145.39 0.07 917387.81 450.84 1011790.17 00:37:05.622 ======================================================== 00:37:05.622 Total : 337.43 0.16 937785.52 450.84 1011849.96 00:37:05.622 00:37:05.622 [2024-10-11 23:00:08.819048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2054d00 (9): Bad file descriptor 00:37:05.622 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:37:05.622 23:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.622 23:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:37:05.622 23:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@35 -- # kill -0 413570 00:37:05.622 23:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 413570 00:37:06.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (413570) - No such process 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 413570 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 413570 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 413570 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 
128 )) 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:06.188 [2024-10-11 23:00:09.341879] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.188 23:00:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=414487 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 414487 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:06.188 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:06.188 [2024-10-11 23:00:09.393532] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:37:06.753 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:06.753 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 414487 00:37:06.753 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:07.318 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:07.318 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 414487 00:37:07.318 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:07.885 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:07.885 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 414487 00:37:07.885 23:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:08.142 23:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:08.142 23:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 414487 00:37:08.142 23:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:08.706 23:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:08.706 23:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 414487 00:37:08.706 23:00:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:09.271 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:09.272 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 414487 00:37:09.272 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:09.529 Initializing NVMe Controllers 00:37:09.529 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:09.529 Controller IO queue size 128, less than required. 00:37:09.529 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:09.529 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:09.529 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:09.529 Initialization complete. Launching workers. 
00:37:09.529 ======================================================== 00:37:09.529 Latency(us) 00:37:09.529 Device Information : IOPS MiB/s Average min max 00:37:09.529 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005141.98 1000200.76 1042257.56 00:37:09.530 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006131.78 1000175.57 1043715.48 00:37:09.530 ======================================================== 00:37:09.530 Total : 256.00 0.12 1005636.88 1000175.57 1043715.48 00:37:09.530 00:37:09.787 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:09.787 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 414487 00:37:09.787 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (414487) - No such process 00:37:09.787 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 414487 00:37:09.787 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:37:09.787 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:37:09.787 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:09.787 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:37:09.787 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:09.787 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:37:09.787 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:37:09.788 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:09.788 rmmod nvme_tcp 00:37:09.788 rmmod nvme_fabrics 00:37:09.788 rmmod nvme_keyring 00:37:09.788 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:09.788 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:37:09.788 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:37:09.788 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 413545 ']' 00:37:09.788 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 413545 00:37:09.788 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 413545 ']' 00:37:09.788 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 413545 00:37:09.788 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:37:09.788 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:09.788 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 413545 00:37:09.788 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:09.788 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:09.788 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 413545' 00:37:09.788 killing process with pid 413545 00:37:09.788 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 413545 00:37:09.788 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 413545 00:37:10.047 23:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:10.048 23:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:10.048 23:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:10.048 23:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:37:10.048 23:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:37:10.048 23:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:10.048 23:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:37:10.048 23:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:10.048 23:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:10.048 23:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:10.048 23:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:10.048 23:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:11.955 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:11.955 00:37:11.955 real 0m12.320s 00:37:11.955 user 0m24.482s 00:37:11.955 sys 0m3.790s 00:37:11.955 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:11.955 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:11.955 ************************************ 00:37:11.955 END TEST nvmf_delete_subsystem 00:37:11.955 ************************************ 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:12.215 ************************************ 00:37:12.215 START TEST nvmf_host_management 00:37:12.215 ************************************ 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:12.215 * Looking for test storage... 
00:37:12.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:37:12.215 23:00:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:12.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.215 --rc genhtml_branch_coverage=1 00:37:12.215 --rc genhtml_function_coverage=1 00:37:12.215 --rc genhtml_legend=1 00:37:12.215 --rc geninfo_all_blocks=1 00:37:12.215 --rc geninfo_unexecuted_blocks=1 00:37:12.215 00:37:12.215 ' 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:12.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.215 --rc genhtml_branch_coverage=1 00:37:12.215 --rc genhtml_function_coverage=1 00:37:12.215 --rc genhtml_legend=1 00:37:12.215 --rc geninfo_all_blocks=1 00:37:12.215 --rc geninfo_unexecuted_blocks=1 00:37:12.215 00:37:12.215 ' 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:12.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.215 --rc genhtml_branch_coverage=1 00:37:12.215 --rc genhtml_function_coverage=1 00:37:12.215 --rc genhtml_legend=1 00:37:12.215 --rc geninfo_all_blocks=1 00:37:12.215 --rc geninfo_unexecuted_blocks=1 00:37:12.215 00:37:12.215 ' 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:12.215 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.215 --rc genhtml_branch_coverage=1 00:37:12.215 --rc genhtml_function_coverage=1 00:37:12.215 --rc genhtml_legend=1 00:37:12.215 --rc geninfo_all_blocks=1 00:37:12.215 --rc geninfo_unexecuted_blocks=1 00:37:12.215 00:37:12.215 ' 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:12.215 23:00:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.215 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.216 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.216 
23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:37:12.216 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.216 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:37:12.216 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:12.216 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:12.216 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:12.216 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:12.216 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:12.216 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:12.216 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:12.216 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:37:12.216 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:12.216 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:12.216 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:12.216 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:12.216 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:37:12.216 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:12.216 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:12.216 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:12.216 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:12.216 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:12.216 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:12.216 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:12.216 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:12.216 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:12.216 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:12.216 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:37:12.216 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:37:14.750 
23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:14.750 23:00:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:14.750 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:14.750 23:00:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:14.750 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.750 23:00:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:14.750 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:14.750 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:14.750 23:00:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:14.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:14.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:37:14.750 00:37:14.750 --- 10.0.0.2 ping statistics --- 00:37:14.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:14.750 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:14.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:14.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:37:14.750 00:37:14.750 --- 10.0.0.1 ping statistics --- 00:37:14.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:14.750 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=416923 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 416923 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 416923 ']' 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:14.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:14.750 [2024-10-11 23:00:17.737825] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:14.750 [2024-10-11 23:00:17.738992] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:37:14.750 [2024-10-11 23:00:17.739046] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:14.750 [2024-10-11 23:00:17.805428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:14.750 [2024-10-11 23:00:17.857224] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:14.750 [2024-10-11 23:00:17.857277] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:14.750 [2024-10-11 23:00:17.857291] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:14.750 [2024-10-11 23:00:17.857302] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:14.750 [2024-10-11 23:00:17.857312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:14.750 [2024-10-11 23:00:17.858946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:14.750 [2024-10-11 23:00:17.858999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:14.750 [2024-10-11 23:00:17.859021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:14.750 [2024-10-11 23:00:17.859023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:14.750 [2024-10-11 23:00:17.952471] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:14.750 [2024-10-11 23:00:17.952720] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:14.750 [2024-10-11 23:00:17.953033] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:14.750 [2024-10-11 23:00:17.953687] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:14.750 [2024-10-11 23:00:17.953943] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.750 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:14.750 [2024-10-11 23:00:18.003788] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:15.009 23:00:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:15.009 Malloc0 00:37:15.009 [2024-10-11 23:00:18.083906] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=416991 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 416991 /var/tmp/bdevperf.sock 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 416991 ']' 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:15.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:15.009 { 00:37:15.009 "params": { 00:37:15.009 "name": "Nvme$subsystem", 00:37:15.009 "trtype": "$TEST_TRANSPORT", 00:37:15.009 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:15.009 "adrfam": "ipv4", 00:37:15.009 "trsvcid": "$NVMF_PORT", 00:37:15.009 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:37:15.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:15.009 "hdgst": ${hdgst:-false}, 00:37:15.009 "ddgst": ${ddgst:-false} 00:37:15.009 }, 00:37:15.009 "method": "bdev_nvme_attach_controller" 00:37:15.009 } 00:37:15.009 EOF 00:37:15.009 )") 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:37:15.009 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:37:15.009 "params": { 00:37:15.009 "name": "Nvme0", 00:37:15.009 "trtype": "tcp", 00:37:15.009 "traddr": "10.0.0.2", 00:37:15.009 "adrfam": "ipv4", 00:37:15.009 "trsvcid": "4420", 00:37:15.009 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:15.009 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:15.009 "hdgst": false, 00:37:15.009 "ddgst": false 00:37:15.009 }, 00:37:15.009 "method": "bdev_nvme_attach_controller" 00:37:15.009 }' 00:37:15.010 [2024-10-11 23:00:18.169959] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:37:15.010 [2024-10-11 23:00:18.170048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid416991 ] 00:37:15.010 [2024-10-11 23:00:18.231316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:15.268 [2024-10-11 23:00:18.278650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:15.268 Running I/O for 10 seconds... 
00:37:15.268 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:15.268 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:37:15.268 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:37:15.268 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.268 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:15.268 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.268 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:15.268 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:37:15.268 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:37:15.268 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:37:15.268 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:37:15.268 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:37:15.268 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:37:15.268 23:00:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:15.268 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:15.268 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:15.268 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.268 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:15.268 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.526 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:37:15.526 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:37:15.526 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:37:15.785 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:37:15.785 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:15.785 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:15.785 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:15.785 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 
00:37:15.785 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:15.785 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.785 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:37:15.785 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:37:15.785 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:37:15.785 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:37:15.785 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:37:15.785 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:15.785 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.785 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:15.785 [2024-10-11 23:00:18.851936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.785 [2024-10-11 23:00:18.851984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.785 [2024-10-11 23:00:18.852011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.785 [2024-10-11 23:00:18.852027] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.785 [2024-10-11 23:00:18.852044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.785 [2024-10-11 23:00:18.852073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.785 [2024-10-11 23:00:18.852090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.785 [2024-10-11 23:00:18.852104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.785 [2024-10-11 23:00:18.852119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.785 [2024-10-11 23:00:18.852132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.785 [2024-10-11 23:00:18.852147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.785 [2024-10-11 23:00:18.852161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.785 [2024-10-11 23:00:18.852176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.785 [2024-10-11 23:00:18.852189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.785 [2024-10-11 23:00:18.852204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.785 [2024-10-11 23:00:18.852218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.785 [2024-10-11 23:00:18.852233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.785 [2024-10-11 23:00:18.852247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.785 [2024-10-11 23:00:18.852262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.785 [2024-10-11 23:00:18.852276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.785 [2024-10-11 23:00:18.852291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.785 [2024-10-11 23:00:18.852306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.785 [2024-10-11 23:00:18.852321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.785 [2024-10-11 23:00:18.852335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.785 [2024-10-11 23:00:18.852350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.785 [2024-10-11 23:00:18.852364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.852379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.852393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.852408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.852426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.852442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.852456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.852483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.852497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.852513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.852527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.852542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 
23:00:18.852566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.852583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.852598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.852613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.852628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.852644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.852658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.852673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.852687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.852702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.852716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.852731] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.852745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.852760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.852773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.852788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.852802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.852821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.852836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.852852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.852875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.852891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.852905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.852920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.852934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.852949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.852963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.852978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.852992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.853007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.853020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.853036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.853050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.853064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.853078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.853093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.853107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.853122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.853136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.853151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.853165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.853180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.853198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.853214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.853228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 
[2024-10-11 23:00:18.853243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.853256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.853271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.853285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.853300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.853314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.853329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.853343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.853358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.853372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.853387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.853400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.853416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.853429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.853444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.853458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.853472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.853486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.853501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.853515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.853530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.853548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.853579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.853595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.786 [2024-10-11 23:00:18.853612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.786 [2024-10-11 23:00:18.853626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.787 [2024-10-11 23:00:18.853641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.787 [2024-10-11 23:00:18.853655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.787 [2024-10-11 23:00:18.853670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.787 [2024-10-11 23:00:18.853685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.787 [2024-10-11 23:00:18.853699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.787 [2024-10-11 23:00:18.853714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.787 [2024-10-11 23:00:18.853728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.787 [2024-10-11 23:00:18.853742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:37:15.787 [2024-10-11 23:00:18.853757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.787 [2024-10-11 23:00:18.853771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.787 [2024-10-11 23:00:18.853786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.787 [2024-10-11 23:00:18.853800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.787 [2024-10-11 23:00:18.853815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.787 [2024-10-11 23:00:18.853829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.787 [2024-10-11 23:00:18.853844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.787 [2024-10-11 23:00:18.853857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.787 [2024-10-11 23:00:18.853889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.787 [2024-10-11 23:00:18.853902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.787 [2024-10-11 23:00:18.853917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.787 [2024-10-11 23:00:18.853937] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.787 [2024-10-11 23:00:18.854017] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2671e80 was disconnected and freed. reset controller. 00:37:15.787 [2024-10-11 23:00:18.855149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:15.787 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.787 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:15.787 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.787 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:15.787 task offset: 74496 on job bdev=Nvme0n1 fails 00:37:15.787 00:37:15.787 Latency(us) 00:37:15.787 [2024-10-11T21:00:19.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:15.787 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:15.787 Job: Nvme0n1 ended in about 0.40 seconds with error 00:37:15.787 Verification LBA range: start 0x0 length 0x400 00:37:15.787 Nvme0n1 : 0.40 1427.07 89.19 158.56 0.00 38829.69 2512.21 44467.39 00:37:15.787 [2024-10-11T21:00:19.055Z] =================================================================================================================== 00:37:15.787 [2024-10-11T21:00:19.055Z] Total : 1427.07 89.19 158.56 0.00 38829.69 2512.21 44467.39 00:37:15.787 [2024-10-11 23:00:18.857060] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:15.787 [2024-10-11 23:00:18.857088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: 
*ERROR*: Failed to flush tqpair=0x2458e00 (9): Bad file descriptor 00:37:15.787 [2024-10-11 23:00:18.858206] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:37:15.787 [2024-10-11 23:00:18.858305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:37:15.787 [2024-10-11 23:00:18.858334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.787 [2024-10-11 23:00:18.858356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:37:15.787 [2024-10-11 23:00:18.858372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:37:15.787 [2024-10-11 23:00:18.858385] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:15.787 [2024-10-11 23:00:18.858398] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2458e00 00:37:15.787 [2024-10-11 23:00:18.858433] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2458e00 (9): Bad file descriptor 00:37:15.787 [2024-10-11 23:00:18.858458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:15.787 [2024-10-11 23:00:18.858473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:15.787 [2024-10-11 23:00:18.858488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:15.787 [2024-10-11 23:00:18.858509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.787 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.787 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:37:16.719 23:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 416991 00:37:16.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (416991) - No such process 00:37:16.719 23:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:37:16.719 23:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:37:16.719 23:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:37:16.719 23:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:37:16.719 23:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:37:16.719 23:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:37:16.719 23:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:37:16.719 23:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:16.719 { 00:37:16.719 "params": { 00:37:16.719 "name": "Nvme$subsystem", 00:37:16.719 "trtype": "$TEST_TRANSPORT", 00:37:16.719 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:37:16.719 "adrfam": "ipv4", 00:37:16.719 "trsvcid": "$NVMF_PORT", 00:37:16.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:16.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:16.719 "hdgst": ${hdgst:-false}, 00:37:16.719 "ddgst": ${ddgst:-false} 00:37:16.719 }, 00:37:16.719 "method": "bdev_nvme_attach_controller" 00:37:16.719 } 00:37:16.719 EOF 00:37:16.719 )") 00:37:16.719 23:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:37:16.719 23:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:37:16.719 23:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:37:16.719 23:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:37:16.719 "params": { 00:37:16.719 "name": "Nvme0", 00:37:16.719 "trtype": "tcp", 00:37:16.719 "traddr": "10.0.0.2", 00:37:16.719 "adrfam": "ipv4", 00:37:16.719 "trsvcid": "4420", 00:37:16.719 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:16.719 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:16.719 "hdgst": false, 00:37:16.719 "ddgst": false 00:37:16.719 }, 00:37:16.719 "method": "bdev_nvme_attach_controller" 00:37:16.719 }' 00:37:16.719 [2024-10-11 23:00:19.916656] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:37:16.719 [2024-10-11 23:00:19.916741] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid417160 ] 00:37:16.719 [2024-10-11 23:00:19.981610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:16.976 [2024-10-11 23:00:20.035036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:17.234 Running I/O for 1 seconds... 
00:37:18.168 1664.00 IOPS, 104.00 MiB/s 00:37:18.168 Latency(us) 00:37:18.168 [2024-10-11T21:00:21.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:18.168 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:18.168 Verification LBA range: start 0x0 length 0x400 00:37:18.168 Nvme0n1 : 1.02 1700.24 106.27 0.00 0.00 37028.83 4587.52 33593.27 00:37:18.168 [2024-10-11T21:00:21.436Z] =================================================================================================================== 00:37:18.168 [2024-10-11T21:00:21.436Z] Total : 1700.24 106.27 0.00 0.00 37028.83 4587.52 33593.27 00:37:18.426 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:37:18.426 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:37:18.426 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:37:18.426 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:18.426 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:37:18.426 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:18.426 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:37:18.426 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:18.426 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:37:18.426 
23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:18.426 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:18.426 rmmod nvme_tcp 00:37:18.426 rmmod nvme_fabrics 00:37:18.426 rmmod nvme_keyring 00:37:18.426 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:18.426 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:37:18.426 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:37:18.426 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 416923 ']' 00:37:18.426 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 416923 00:37:18.426 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 416923 ']' 00:37:18.426 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 416923 00:37:18.426 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:37:18.426 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:18.426 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 416923 00:37:18.426 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:18.426 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:18.426 23:00:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 416923' 00:37:18.426 killing process with pid 416923 00:37:18.426 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 416923 00:37:18.426 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 416923 00:37:18.686 [2024-10-11 23:00:21.724586] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:37:18.686 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:18.686 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:18.686 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:18.686 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:37:18.686 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:37:18.686 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:18.686 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:37:18.686 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:18.686 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:18.686 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:18.686 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:18.686 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:20.592 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:20.592 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:37:20.592 00:37:20.592 real 0m8.553s 00:37:20.592 user 0m16.358s 00:37:20.592 sys 0m3.697s 00:37:20.592 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:20.592 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:20.592 ************************************ 00:37:20.592 END TEST nvmf_host_management 00:37:20.592 ************************************ 00:37:20.592 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:20.592 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:20.592 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:20.592 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:20.592 ************************************ 00:37:20.592 START TEST nvmf_lvol 00:37:20.592 ************************************ 00:37:20.592 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:20.851 * Looking for test storage... 
00:37:20.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:20.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.851 --rc genhtml_branch_coverage=1 00:37:20.851 --rc genhtml_function_coverage=1 00:37:20.851 --rc genhtml_legend=1 00:37:20.851 --rc geninfo_all_blocks=1 00:37:20.851 --rc geninfo_unexecuted_blocks=1 00:37:20.851 00:37:20.851 ' 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:20.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.851 --rc genhtml_branch_coverage=1 00:37:20.851 --rc genhtml_function_coverage=1 00:37:20.851 --rc genhtml_legend=1 00:37:20.851 --rc geninfo_all_blocks=1 00:37:20.851 --rc geninfo_unexecuted_blocks=1 00:37:20.851 00:37:20.851 ' 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:20.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.851 --rc genhtml_branch_coverage=1 00:37:20.851 --rc genhtml_function_coverage=1 00:37:20.851 --rc genhtml_legend=1 00:37:20.851 --rc geninfo_all_blocks=1 00:37:20.851 --rc geninfo_unexecuted_blocks=1 00:37:20.851 00:37:20.851 ' 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:20.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.851 --rc genhtml_branch_coverage=1 00:37:20.851 --rc genhtml_function_coverage=1 00:37:20.851 --rc genhtml_legend=1 00:37:20.851 --rc geninfo_all_blocks=1 00:37:20.851 --rc geninfo_unexecuted_blocks=1 00:37:20.851 00:37:20.851 ' 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:20.851 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:20.852 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:20.852 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:20.852 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:20.852 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:20.852 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:20.852 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:20.852 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:20.852 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:20.852 
23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:37:20.852 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:37:23.393 23:00:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:23.393 23:00:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:23.393 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:23.393 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:23.393 23:00:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:23.393 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:23.393 23:00:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:23.393 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:23.393 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:23.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:23.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:37:23.394 00:37:23.394 --- 10.0.0.2 ping statistics --- 00:37:23.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:23.394 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:23.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:23.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:37:23.394 00:37:23.394 --- 10.0.0.1 ping statistics --- 00:37:23.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:23.394 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=419340 
00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 419340 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 419340 ']' 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:23.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:23.394 [2024-10-11 23:00:26.265260] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:23.394 [2024-10-11 23:00:26.266348] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:37:23.394 [2024-10-11 23:00:26.266404] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:23.394 [2024-10-11 23:00:26.330254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:23.394 [2024-10-11 23:00:26.374133] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:23.394 [2024-10-11 23:00:26.374188] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:23.394 [2024-10-11 23:00:26.374202] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:23.394 [2024-10-11 23:00:26.374213] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:23.394 [2024-10-11 23:00:26.374222] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:23.394 [2024-10-11 23:00:26.375640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:23.394 [2024-10-11 23:00:26.375704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:23.394 [2024-10-11 23:00:26.375706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:23.394 [2024-10-11 23:00:26.457928] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:23.394 [2024-10-11 23:00:26.458115] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:23.394 [2024-10-11 23:00:26.458119] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:37:23.394 [2024-10-11 23:00:26.458378] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:23.394 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:23.652 [2024-10-11 23:00:26.792386] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:23.652 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:23.910 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:37:23.910 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:24.168 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:37:24.168 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:37:24.426 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:37:24.992 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c9a0708b-6128-4416-b69f-d46d13a179e2 00:37:24.992 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c9a0708b-6128-4416-b69f-d46d13a179e2 lvol 20 00:37:24.992 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b23927ad-80a0-4d6d-9706-c676f7ddaca8 00:37:24.992 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:25.556 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b23927ad-80a0-4d6d-9706-c676f7ddaca8 00:37:25.556 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:25.813 [2024-10-11 23:00:29.052514] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:25.813 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:26.378 
23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=419761 00:37:26.378 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:37:26.378 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:37:27.311 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b23927ad-80a0-4d6d-9706-c676f7ddaca8 MY_SNAPSHOT 00:37:27.569 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7ec2e6b1-bc26-45ea-815b-5ee9a40efbce 00:37:27.569 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize b23927ad-80a0-4d6d-9706-c676f7ddaca8 30 00:37:27.827 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7ec2e6b1-bc26-45ea-815b-5ee9a40efbce MY_CLONE 00:37:28.084 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a80cc017-69a6-4d60-9cf8-ed9cb7fa5cc9 00:37:28.084 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a80cc017-69a6-4d60-9cf8-ed9cb7fa5cc9 00:37:28.650 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 419761 00:37:36.764 Initializing NVMe Controllers 00:37:36.764 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:36.764 
Controller IO queue size 128, less than required. 00:37:36.764 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:36.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:37:36.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:37:36.764 Initialization complete. Launching workers. 00:37:36.765 ======================================================== 00:37:36.765 Latency(us) 00:37:36.765 Device Information : IOPS MiB/s Average min max 00:37:36.765 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10301.90 40.24 12425.13 1206.73 62681.11 00:37:36.765 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10531.70 41.14 12156.26 3544.13 55604.66 00:37:36.765 ======================================================== 00:37:36.765 Total : 20833.60 81.38 12289.21 1206.73 62681.11 00:37:36.765 00:37:36.765 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:36.765 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b23927ad-80a0-4d6d-9706-c676f7ddaca8 00:37:37.023 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c9a0708b-6128-4416-b69f-d46d13a179e2 00:37:37.281 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:37:37.281 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:37:37.281 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:37:37.281 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:37.281 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:37:37.281 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:37.281 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:37:37.281 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:37.281 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:37.281 rmmod nvme_tcp 00:37:37.539 rmmod nvme_fabrics 00:37:37.539 rmmod nvme_keyring 00:37:37.539 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:37.539 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:37:37.539 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:37:37.539 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 419340 ']' 00:37:37.539 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 419340 00:37:37.539 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 419340 ']' 00:37:37.539 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 419340 00:37:37.539 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:37:37.539 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:37.539 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 419340 00:37:37.539 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:37.539 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:37.539 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 419340' 00:37:37.539 killing process with pid 419340 00:37:37.539 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 419340 00:37:37.539 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 419340 00:37:37.799 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:37.799 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:37.799 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:37.799 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:37:37.799 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:37:37.799 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:37.799 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:37:37.799 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:37.799 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:37.799 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:37.799 23:00:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:37.799 23:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:39.705 23:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:39.705 00:37:39.705 real 0m19.045s 00:37:39.705 user 0m55.831s 00:37:39.705 sys 0m7.973s 00:37:39.705 23:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:39.705 23:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:39.705 ************************************ 00:37:39.705 END TEST nvmf_lvol 00:37:39.705 ************************************ 00:37:39.706 23:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:39.706 23:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:39.706 23:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:39.706 23:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:39.706 ************************************ 00:37:39.706 START TEST nvmf_lvs_grow 00:37:39.706 ************************************ 00:37:39.706 23:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:39.964 * Looking for test storage... 
00:37:39.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:39.964 23:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:39.964 23:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:37:39.964 23:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:39.964 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:39.964 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:39.964 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:39.964 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:39.964 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:37:39.964 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:37:39.964 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:37:39.964 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:37:39.964 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:37:39.964 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:37:39.964 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:37:39.964 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:39.964 23:00:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:37:39.964 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:37:39.964 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:39.964 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:39.964 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:37:39.964 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:37:39.964 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:39.965 23:00:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:39.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.965 --rc genhtml_branch_coverage=1 00:37:39.965 --rc genhtml_function_coverage=1 00:37:39.965 --rc genhtml_legend=1 00:37:39.965 --rc geninfo_all_blocks=1 00:37:39.965 --rc geninfo_unexecuted_blocks=1 00:37:39.965 00:37:39.965 ' 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:39.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.965 --rc genhtml_branch_coverage=1 00:37:39.965 --rc genhtml_function_coverage=1 00:37:39.965 --rc genhtml_legend=1 00:37:39.965 --rc geninfo_all_blocks=1 00:37:39.965 --rc geninfo_unexecuted_blocks=1 00:37:39.965 00:37:39.965 ' 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:39.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.965 --rc genhtml_branch_coverage=1 00:37:39.965 --rc genhtml_function_coverage=1 00:37:39.965 --rc genhtml_legend=1 00:37:39.965 --rc geninfo_all_blocks=1 00:37:39.965 --rc geninfo_unexecuted_blocks=1 00:37:39.965 00:37:39.965 ' 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:39.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.965 --rc genhtml_branch_coverage=1 00:37:39.965 --rc genhtml_function_coverage=1 00:37:39.965 --rc genhtml_legend=1 00:37:39.965 --rc geninfo_all_blocks=1 00:37:39.965 --rc 
geninfo_unexecuted_blocks=1 00:37:39.965 00:37:39.965 ' 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:39.965 23:00:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.965 23:00:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:39.965 23:00:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:37:39.965 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:42.500 
23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:42.500 23:00:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:42.500 23:00:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:42.500 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:42.500 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:42.500 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:42.500 23:00:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:42.500 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:42.501 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:42.501 
23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:42.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:42.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:37:42.501 00:37:42.501 --- 10.0.0.2 ping statistics --- 00:37:42.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:42.501 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:42.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:42.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:37:42.501 00:37:42.501 --- 10.0.0.1 ping statistics --- 00:37:42.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:42.501 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:42.501 23:00:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=423013 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 423013 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 423013 ']' 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:42.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:42.501 [2024-10-11 23:00:45.348736] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:42.501 [2024-10-11 23:00:45.349817] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:37:42.501 [2024-10-11 23:00:45.349884] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:42.501 [2024-10-11 23:00:45.414009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:42.501 [2024-10-11 23:00:45.459272] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:42.501 [2024-10-11 23:00:45.459323] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:42.501 [2024-10-11 23:00:45.459343] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:42.501 [2024-10-11 23:00:45.459354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:42.501 [2024-10-11 23:00:45.459364] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:42.501 [2024-10-11 23:00:45.459981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:42.501 [2024-10-11 23:00:45.541040] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:42.501 [2024-10-11 23:00:45.541338] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:42.501 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:42.762 [2024-10-11 23:00:45.868576] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:42.762 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:37:42.762 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:42.762 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:42.762 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:42.762 ************************************ 00:37:42.762 START TEST lvs_grow_clean 00:37:42.762 ************************************ 00:37:42.762 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:37:42.762 23:00:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:42.762 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:42.762 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:42.762 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:42.762 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:42.762 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:42.762 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:42.762 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:42.762 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:43.021 23:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:43.021 23:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:43.279 23:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c95c2901-f853-4460-9476-538e1f51305d 00:37:43.279 23:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c95c2901-f853-4460-9476-538e1f51305d 00:37:43.279 23:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:43.536 23:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:43.537 23:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:43.537 23:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c95c2901-f853-4460-9476-538e1f51305d lvol 150 00:37:43.794 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3f6de205-b9d0-4d62-8110-0f1150d005e4 00:37:43.794 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:43.794 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:44.053 [2024-10-11 23:00:47.292447] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:44.053 [2024-10-11 23:00:47.292589] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:44.053 true 00:37:44.053 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c95c2901-f853-4460-9476-538e1f51305d 00:37:44.053 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:44.618 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:44.619 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:44.619 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3f6de205-b9d0-4d62-8110-0f1150d005e4 00:37:44.876 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:45.135 [2024-10-11 23:00:48.380753] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:45.135 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:45.701 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=423443 00:37:45.701 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:45.701 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:45.701 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 423443 /var/tmp/bdevperf.sock 00:37:45.701 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 423443 ']' 00:37:45.701 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:45.701 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:45.701 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:45.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:37:45.701 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:45.701 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:45.701 [2024-10-11 23:00:48.719328] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:37:45.701 [2024-10-11 23:00:48.719435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid423443 ] 00:37:45.701 [2024-10-11 23:00:48.778690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:45.701 [2024-10-11 23:00:48.824078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:45.701 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:45.701 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:37:45.701 23:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:46.269 Nvme0n1 00:37:46.269 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:46.269 [ 00:37:46.269 { 00:37:46.269 "name": "Nvme0n1", 00:37:46.269 "aliases": [ 00:37:46.269 "3f6de205-b9d0-4d62-8110-0f1150d005e4" 00:37:46.269 ], 00:37:46.269 "product_name": "NVMe disk", 00:37:46.269 
"block_size": 4096, 00:37:46.269 "num_blocks": 38912, 00:37:46.269 "uuid": "3f6de205-b9d0-4d62-8110-0f1150d005e4", 00:37:46.269 "numa_id": 0, 00:37:46.269 "assigned_rate_limits": { 00:37:46.269 "rw_ios_per_sec": 0, 00:37:46.269 "rw_mbytes_per_sec": 0, 00:37:46.269 "r_mbytes_per_sec": 0, 00:37:46.269 "w_mbytes_per_sec": 0 00:37:46.269 }, 00:37:46.269 "claimed": false, 00:37:46.269 "zoned": false, 00:37:46.269 "supported_io_types": { 00:37:46.269 "read": true, 00:37:46.269 "write": true, 00:37:46.269 "unmap": true, 00:37:46.269 "flush": true, 00:37:46.269 "reset": true, 00:37:46.269 "nvme_admin": true, 00:37:46.269 "nvme_io": true, 00:37:46.269 "nvme_io_md": false, 00:37:46.269 "write_zeroes": true, 00:37:46.269 "zcopy": false, 00:37:46.269 "get_zone_info": false, 00:37:46.269 "zone_management": false, 00:37:46.269 "zone_append": false, 00:37:46.269 "compare": true, 00:37:46.269 "compare_and_write": true, 00:37:46.269 "abort": true, 00:37:46.269 "seek_hole": false, 00:37:46.269 "seek_data": false, 00:37:46.269 "copy": true, 00:37:46.269 "nvme_iov_md": false 00:37:46.269 }, 00:37:46.269 "memory_domains": [ 00:37:46.269 { 00:37:46.269 "dma_device_id": "system", 00:37:46.269 "dma_device_type": 1 00:37:46.269 } 00:37:46.269 ], 00:37:46.269 "driver_specific": { 00:37:46.269 "nvme": [ 00:37:46.269 { 00:37:46.269 "trid": { 00:37:46.269 "trtype": "TCP", 00:37:46.269 "adrfam": "IPv4", 00:37:46.269 "traddr": "10.0.0.2", 00:37:46.269 "trsvcid": "4420", 00:37:46.269 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:46.269 }, 00:37:46.269 "ctrlr_data": { 00:37:46.269 "cntlid": 1, 00:37:46.269 "vendor_id": "0x8086", 00:37:46.269 "model_number": "SPDK bdev Controller", 00:37:46.269 "serial_number": "SPDK0", 00:37:46.269 "firmware_revision": "25.01", 00:37:46.269 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:46.269 "oacs": { 00:37:46.269 "security": 0, 00:37:46.269 "format": 0, 00:37:46.269 "firmware": 0, 00:37:46.269 "ns_manage": 0 00:37:46.269 }, 00:37:46.269 "multi_ctrlr": true, 
00:37:46.269 "ana_reporting": false 00:37:46.269 }, 00:37:46.269 "vs": { 00:37:46.269 "nvme_version": "1.3" 00:37:46.269 }, 00:37:46.269 "ns_data": { 00:37:46.269 "id": 1, 00:37:46.269 "can_share": true 00:37:46.269 } 00:37:46.269 } 00:37:46.269 ], 00:37:46.269 "mp_policy": "active_passive" 00:37:46.269 } 00:37:46.269 } 00:37:46.269 ] 00:37:46.528 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=423575 00:37:46.528 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:46.528 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:46.528 Running I/O for 10 seconds... 00:37:47.464 Latency(us) 00:37:47.464 [2024-10-11T21:00:50.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:47.464 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:47.464 Nvme0n1 : 1.00 14567.00 56.90 0.00 0.00 0.00 0.00 0.00 00:37:47.464 [2024-10-11T21:00:50.732Z] =================================================================================================================== 00:37:47.464 [2024-10-11T21:00:50.732Z] Total : 14567.00 56.90 0.00 0.00 0.00 0.00 0.00 00:37:47.464 00:37:48.399 23:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c95c2901-f853-4460-9476-538e1f51305d 00:37:48.399 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:48.399 Nvme0n1 : 2.00 14796.00 57.80 0.00 0.00 0.00 0.00 0.00 00:37:48.399 [2024-10-11T21:00:51.667Z] 
=================================================================================================================== 00:37:48.399 [2024-10-11T21:00:51.667Z] Total : 14796.00 57.80 0.00 0.00 0.00 0.00 0.00 00:37:48.399 00:37:48.657 true 00:37:48.657 23:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c95c2901-f853-4460-9476-538e1f51305d 00:37:48.657 23:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:37:48.916 23:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:37:48.916 23:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:37:48.916 23:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 423575 00:37:49.482 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:49.482 Nvme0n1 : 3.00 14870.33 58.09 0.00 0.00 0.00 0.00 0.00 00:37:49.482 [2024-10-11T21:00:52.750Z] =================================================================================================================== 00:37:49.482 [2024-10-11T21:00:52.750Z] Total : 14870.33 58.09 0.00 0.00 0.00 0.00 0.00 00:37:49.482 00:37:50.417 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:50.417 Nvme0n1 : 4.00 14953.00 58.41 0.00 0.00 0.00 0.00 0.00 00:37:50.417 [2024-10-11T21:00:53.685Z] =================================================================================================================== 00:37:50.417 [2024-10-11T21:00:53.685Z] Total : 14953.00 58.41 0.00 0.00 0.00 0.00 0.00 00:37:50.417 00:37:51.791 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:37:51.791 Nvme0n1 : 5.00 15042.60 58.76 0.00 0.00 0.00 0.00 0.00 00:37:51.791 [2024-10-11T21:00:55.059Z] =================================================================================================================== 00:37:51.791 [2024-10-11T21:00:55.059Z] Total : 15042.60 58.76 0.00 0.00 0.00 0.00 0.00 00:37:51.791 00:37:52.724 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:52.724 Nvme0n1 : 6.00 15132.17 59.11 0.00 0.00 0.00 0.00 0.00 00:37:52.724 [2024-10-11T21:00:55.992Z] =================================================================================================================== 00:37:52.724 [2024-10-11T21:00:55.992Z] Total : 15132.17 59.11 0.00 0.00 0.00 0.00 0.00 00:37:52.724 00:37:53.658 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:53.658 Nvme0n1 : 7.00 15188.00 59.33 0.00 0.00 0.00 0.00 0.00 00:37:53.658 [2024-10-11T21:00:56.926Z] =================================================================================================================== 00:37:53.658 [2024-10-11T21:00:56.926Z] Total : 15188.00 59.33 0.00 0.00 0.00 0.00 0.00 00:37:53.658 00:37:54.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:54.591 Nvme0n1 : 8.00 15238.25 59.52 0.00 0.00 0.00 0.00 0.00 00:37:54.591 [2024-10-11T21:00:57.859Z] =================================================================================================================== 00:37:54.591 [2024-10-11T21:00:57.859Z] Total : 15238.25 59.52 0.00 0.00 0.00 0.00 0.00 00:37:54.591 00:37:55.652 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:55.652 Nvme0n1 : 9.00 15282.78 59.70 0.00 0.00 0.00 0.00 0.00 00:37:55.652 [2024-10-11T21:00:58.920Z] =================================================================================================================== 00:37:55.652 [2024-10-11T21:00:58.920Z] Total : 15282.78 59.70 0.00 0.00 0.00 0.00 0.00 00:37:55.652 
00:37:56.598 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:56.598 Nvme0n1 : 10.00 15306.60 59.79 0.00 0.00 0.00 0.00 0.00 00:37:56.598 [2024-10-11T21:00:59.866Z] =================================================================================================================== 00:37:56.598 [2024-10-11T21:00:59.866Z] Total : 15306.60 59.79 0.00 0.00 0.00 0.00 0.00 00:37:56.598 00:37:56.598 00:37:56.598 Latency(us) 00:37:56.598 [2024-10-11T21:00:59.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:56.598 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:56.598 Nvme0n1 : 10.01 15308.20 59.80 0.00 0.00 8356.82 4903.06 18350.08 00:37:56.598 [2024-10-11T21:00:59.866Z] =================================================================================================================== 00:37:56.598 [2024-10-11T21:00:59.866Z] Total : 15308.20 59.80 0.00 0.00 8356.82 4903.06 18350.08 00:37:56.598 { 00:37:56.598 "results": [ 00:37:56.598 { 00:37:56.598 "job": "Nvme0n1", 00:37:56.598 "core_mask": "0x2", 00:37:56.598 "workload": "randwrite", 00:37:56.598 "status": "finished", 00:37:56.598 "queue_depth": 128, 00:37:56.598 "io_size": 4096, 00:37:56.598 "runtime": 10.007316, 00:37:56.598 "iops": 15308.200520499202, 00:37:56.598 "mibps": 59.79765828320001, 00:37:56.598 "io_failed": 0, 00:37:56.598 "io_timeout": 0, 00:37:56.598 "avg_latency_us": 8356.822745770432, 00:37:56.598 "min_latency_us": 4903.063703703704, 00:37:56.598 "max_latency_us": 18350.08 00:37:56.598 } 00:37:56.598 ], 00:37:56.598 "core_count": 1 00:37:56.598 } 00:37:56.598 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 423443 00:37:56.598 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 423443 ']' 00:37:56.598 23:00:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 423443 00:37:56.598 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:37:56.598 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:56.598 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 423443 00:37:56.598 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:56.598 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:56.598 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 423443' 00:37:56.598 killing process with pid 423443 00:37:56.598 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 423443 00:37:56.598 Received shutdown signal, test time was about 10.000000 seconds 00:37:56.598 00:37:56.598 Latency(us) 00:37:56.598 [2024-10-11T21:00:59.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:56.598 [2024-10-11T21:00:59.866Z] =================================================================================================================== 00:37:56.598 [2024-10-11T21:00:59.866Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:56.598 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 423443 00:37:56.856 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:57.114 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:57.373 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c95c2901-f853-4460-9476-538e1f51305d 00:37:57.373 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:37:57.631 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:37:57.631 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:37:57.631 23:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:57.889 [2024-10-11 23:01:01.032504] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:37:57.889 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c95c2901-f853-4460-9476-538e1f51305d 00:37:57.889 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:37:57.889 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c95c2901-f853-4460-9476-538e1f51305d 00:37:57.889 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:57.889 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:57.889 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:57.889 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:57.889 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:57.889 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:57.889 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:57.889 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:37:57.889 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c95c2901-f853-4460-9476-538e1f51305d 00:37:58.147 request: 00:37:58.147 { 00:37:58.147 "uuid": "c95c2901-f853-4460-9476-538e1f51305d", 00:37:58.147 "method": 
"bdev_lvol_get_lvstores", 00:37:58.147 "req_id": 1 00:37:58.147 } 00:37:58.147 Got JSON-RPC error response 00:37:58.147 response: 00:37:58.147 { 00:37:58.147 "code": -19, 00:37:58.147 "message": "No such device" 00:37:58.147 } 00:37:58.147 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:37:58.147 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:58.147 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:58.147 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:58.147 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:58.406 aio_bdev 00:37:58.406 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3f6de205-b9d0-4d62-8110-0f1150d005e4 00:37:58.406 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=3f6de205-b9d0-4d62-8110-0f1150d005e4 00:37:58.406 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:58.406 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:37:58.406 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:58.406 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:58.406 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:58.664 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3f6de205-b9d0-4d62-8110-0f1150d005e4 -t 2000 00:37:58.922 [ 00:37:58.922 { 00:37:58.922 "name": "3f6de205-b9d0-4d62-8110-0f1150d005e4", 00:37:58.922 "aliases": [ 00:37:58.922 "lvs/lvol" 00:37:58.922 ], 00:37:58.922 "product_name": "Logical Volume", 00:37:58.922 "block_size": 4096, 00:37:58.922 "num_blocks": 38912, 00:37:58.922 "uuid": "3f6de205-b9d0-4d62-8110-0f1150d005e4", 00:37:58.922 "assigned_rate_limits": { 00:37:58.922 "rw_ios_per_sec": 0, 00:37:58.922 "rw_mbytes_per_sec": 0, 00:37:58.922 "r_mbytes_per_sec": 0, 00:37:58.922 "w_mbytes_per_sec": 0 00:37:58.922 }, 00:37:58.922 "claimed": false, 00:37:58.922 "zoned": false, 00:37:58.922 "supported_io_types": { 00:37:58.922 "read": true, 00:37:58.922 "write": true, 00:37:58.922 "unmap": true, 00:37:58.922 "flush": false, 00:37:58.922 "reset": true, 00:37:58.922 "nvme_admin": false, 00:37:58.922 "nvme_io": false, 00:37:58.922 "nvme_io_md": false, 00:37:58.922 "write_zeroes": true, 00:37:58.922 "zcopy": false, 00:37:58.922 "get_zone_info": false, 00:37:58.922 "zone_management": false, 00:37:58.922 "zone_append": false, 00:37:58.922 "compare": false, 00:37:58.923 "compare_and_write": false, 00:37:58.923 "abort": false, 00:37:58.923 "seek_hole": true, 00:37:58.923 "seek_data": true, 00:37:58.923 "copy": false, 00:37:58.923 "nvme_iov_md": false 00:37:58.923 }, 00:37:58.923 "driver_specific": { 00:37:58.923 "lvol": { 00:37:58.923 "lvol_store_uuid": "c95c2901-f853-4460-9476-538e1f51305d", 00:37:58.923 "base_bdev": "aio_bdev", 00:37:58.923 
"thin_provision": false, 00:37:58.923 "num_allocated_clusters": 38, 00:37:58.923 "snapshot": false, 00:37:58.923 "clone": false, 00:37:58.923 "esnap_clone": false 00:37:58.923 } 00:37:58.923 } 00:37:58.923 } 00:37:58.923 ] 00:37:58.923 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:37:58.923 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c95c2901-f853-4460-9476-538e1f51305d 00:37:58.923 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:37:59.181 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:37:59.181 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c95c2901-f853-4460-9476-538e1f51305d 00:37:59.181 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:37:59.747 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:37:59.747 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3f6de205-b9d0-4d62-8110-0f1150d005e4 00:37:59.747 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c95c2901-f853-4460-9476-538e1f51305d 
00:38:00.314 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:00.314 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:00.572 00:38:00.572 real 0m17.689s 00:38:00.572 user 0m17.292s 00:38:00.572 sys 0m1.791s 00:38:00.572 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:00.572 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:00.572 ************************************ 00:38:00.573 END TEST lvs_grow_clean 00:38:00.573 ************************************ 00:38:00.573 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:38:00.573 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:00.573 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:00.573 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:00.573 ************************************ 00:38:00.573 START TEST lvs_grow_dirty 00:38:00.573 ************************************ 00:38:00.573 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:38:00.573 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:00.573 23:01:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:00.573 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:00.573 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:00.573 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:00.573 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:00.573 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:00.573 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:00.573 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:00.831 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:00.831 23:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:01.089 23:01:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0415eaec-8c2c-4750-9b77-e427f2f092bf 00:38:01.089 23:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0415eaec-8c2c-4750-9b77-e427f2f092bf 00:38:01.090 23:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:01.348 23:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:01.348 23:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:01.348 23:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0415eaec-8c2c-4750-9b77-e427f2f092bf lvol 150 00:38:01.606 23:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6f528eb9-5bf6-488e-9e98-5fb6e09fde34 00:38:01.606 23:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:01.606 23:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:01.864 [2024-10-11 23:01:05.056456] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:01.864 [2024-10-11 
23:01:05.056596] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:01.864 true 00:38:01.864 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0415eaec-8c2c-4750-9b77-e427f2f092bf 00:38:01.865 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:02.123 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:02.123 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:02.381 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6f528eb9-5bf6-488e-9e98-5fb6e09fde34 00:38:02.639 23:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:02.896 [2024-10-11 23:01:06.145154] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:02.896 23:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:03.461 23:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=425495 00:38:03.461 23:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:03.462 23:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:03.462 23:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 425495 /var/tmp/bdevperf.sock 00:38:03.462 23:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 425495 ']' 00:38:03.462 23:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:03.462 23:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:03.462 23:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:03.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:03.462 23:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:03.462 23:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:03.462 [2024-10-11 23:01:06.475380] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:38:03.462 [2024-10-11 23:01:06.475449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid425495 ] 00:38:03.462 [2024-10-11 23:01:06.535173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:03.462 [2024-10-11 23:01:06.583358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:03.462 23:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:03.462 23:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:38:03.462 23:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:04.028 Nvme0n1 00:38:04.028 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:04.286 [ 00:38:04.286 { 00:38:04.286 "name": "Nvme0n1", 00:38:04.286 "aliases": [ 00:38:04.286 "6f528eb9-5bf6-488e-9e98-5fb6e09fde34" 00:38:04.286 ], 00:38:04.286 "product_name": "NVMe disk", 00:38:04.286 "block_size": 4096, 00:38:04.286 "num_blocks": 38912, 00:38:04.286 "uuid": "6f528eb9-5bf6-488e-9e98-5fb6e09fde34", 00:38:04.286 "numa_id": 0, 00:38:04.286 "assigned_rate_limits": { 00:38:04.286 "rw_ios_per_sec": 0, 00:38:04.286 "rw_mbytes_per_sec": 0, 00:38:04.286 "r_mbytes_per_sec": 0, 00:38:04.286 "w_mbytes_per_sec": 0 00:38:04.286 }, 00:38:04.286 "claimed": false, 00:38:04.286 "zoned": false, 
00:38:04.286 "supported_io_types": { 00:38:04.286 "read": true, 00:38:04.286 "write": true, 00:38:04.286 "unmap": true, 00:38:04.286 "flush": true, 00:38:04.286 "reset": true, 00:38:04.286 "nvme_admin": true, 00:38:04.286 "nvme_io": true, 00:38:04.286 "nvme_io_md": false, 00:38:04.286 "write_zeroes": true, 00:38:04.286 "zcopy": false, 00:38:04.286 "get_zone_info": false, 00:38:04.286 "zone_management": false, 00:38:04.286 "zone_append": false, 00:38:04.286 "compare": true, 00:38:04.286 "compare_and_write": true, 00:38:04.286 "abort": true, 00:38:04.286 "seek_hole": false, 00:38:04.286 "seek_data": false, 00:38:04.286 "copy": true, 00:38:04.286 "nvme_iov_md": false 00:38:04.286 }, 00:38:04.286 "memory_domains": [ 00:38:04.286 { 00:38:04.286 "dma_device_id": "system", 00:38:04.286 "dma_device_type": 1 00:38:04.286 } 00:38:04.286 ], 00:38:04.286 "driver_specific": { 00:38:04.286 "nvme": [ 00:38:04.286 { 00:38:04.286 "trid": { 00:38:04.286 "trtype": "TCP", 00:38:04.286 "adrfam": "IPv4", 00:38:04.286 "traddr": "10.0.0.2", 00:38:04.286 "trsvcid": "4420", 00:38:04.286 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:04.286 }, 00:38:04.286 "ctrlr_data": { 00:38:04.286 "cntlid": 1, 00:38:04.286 "vendor_id": "0x8086", 00:38:04.286 "model_number": "SPDK bdev Controller", 00:38:04.286 "serial_number": "SPDK0", 00:38:04.286 "firmware_revision": "25.01", 00:38:04.286 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:04.286 "oacs": { 00:38:04.286 "security": 0, 00:38:04.286 "format": 0, 00:38:04.286 "firmware": 0, 00:38:04.286 "ns_manage": 0 00:38:04.286 }, 00:38:04.286 "multi_ctrlr": true, 00:38:04.286 "ana_reporting": false 00:38:04.286 }, 00:38:04.286 "vs": { 00:38:04.286 "nvme_version": "1.3" 00:38:04.286 }, 00:38:04.286 "ns_data": { 00:38:04.286 "id": 1, 00:38:04.286 "can_share": true 00:38:04.286 } 00:38:04.287 } 00:38:04.287 ], 00:38:04.287 "mp_policy": "active_passive" 00:38:04.287 } 00:38:04.287 } 00:38:04.287 ] 00:38:04.287 23:01:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=425622 00:38:04.287 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:04.287 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:04.287 Running I/O for 10 seconds... 00:38:05.222 Latency(us) 00:38:05.222 [2024-10-11T21:01:08.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:05.222 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:05.222 Nvme0n1 : 1.00 14947.00 58.39 0.00 0.00 0.00 0.00 0.00 00:38:05.222 [2024-10-11T21:01:08.490Z] =================================================================================================================== 00:38:05.222 [2024-10-11T21:01:08.490Z] Total : 14947.00 58.39 0.00 0.00 0.00 0.00 0.00 00:38:05.222 00:38:06.155 23:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0415eaec-8c2c-4750-9b77-e427f2f092bf 00:38:06.413 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:06.413 Nvme0n1 : 2.00 15143.50 59.15 0.00 0.00 0.00 0.00 0.00 00:38:06.413 [2024-10-11T21:01:09.681Z] =================================================================================================================== 00:38:06.413 [2024-10-11T21:01:09.681Z] Total : 15143.50 59.15 0.00 0.00 0.00 0.00 0.00 00:38:06.413 00:38:06.413 true 00:38:06.413 23:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 0415eaec-8c2c-4750-9b77-e427f2f092bf 00:38:06.413 23:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:06.979 23:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:06.979 23:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:06.979 23:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 425622 00:38:07.237 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:07.237 Nvme0n1 : 3.00 15130.00 59.10 0.00 0.00 0.00 0.00 0.00 00:38:07.237 [2024-10-11T21:01:10.505Z] =================================================================================================================== 00:38:07.237 [2024-10-11T21:01:10.506Z] Total : 15130.00 59.10 0.00 0.00 0.00 0.00 0.00 00:38:07.238 00:38:08.611 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:08.611 Nvme0n1 : 4.00 15178.00 59.29 0.00 0.00 0.00 0.00 0.00 00:38:08.611 [2024-10-11T21:01:11.879Z] =================================================================================================================== 00:38:08.611 [2024-10-11T21:01:11.879Z] Total : 15178.00 59.29 0.00 0.00 0.00 0.00 0.00 00:38:08.611 00:38:09.544 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:09.544 Nvme0n1 : 5.00 15234.20 59.51 0.00 0.00 0.00 0.00 0.00 00:38:09.544 [2024-10-11T21:01:12.812Z] =================================================================================================================== 00:38:09.545 [2024-10-11T21:01:12.813Z] Total : 15234.20 59.51 0.00 0.00 0.00 0.00 0.00 00:38:09.545 00:38:10.478 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:38:10.478 Nvme0n1 : 6.00 15306.83 59.79 0.00 0.00 0.00 0.00 0.00 00:38:10.478 [2024-10-11T21:01:13.746Z] =================================================================================================================== 00:38:10.478 [2024-10-11T21:01:13.746Z] Total : 15306.83 59.79 0.00 0.00 0.00 0.00 0.00 00:38:10.478 00:38:11.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:11.411 Nvme0n1 : 7.00 15327.00 59.87 0.00 0.00 0.00 0.00 0.00 00:38:11.411 [2024-10-11T21:01:14.679Z] =================================================================================================================== 00:38:11.411 [2024-10-11T21:01:14.679Z] Total : 15327.00 59.87 0.00 0.00 0.00 0.00 0.00 00:38:11.411 00:38:12.345 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:12.345 Nvme0n1 : 8.00 15365.12 60.02 0.00 0.00 0.00 0.00 0.00 00:38:12.345 [2024-10-11T21:01:15.613Z] =================================================================================================================== 00:38:12.345 [2024-10-11T21:01:15.613Z] Total : 15365.12 60.02 0.00 0.00 0.00 0.00 0.00 00:38:12.345 00:38:13.277 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:13.277 Nvme0n1 : 9.00 15410.67 60.20 0.00 0.00 0.00 0.00 0.00 00:38:13.277 [2024-10-11T21:01:16.545Z] =================================================================================================================== 00:38:13.277 [2024-10-11T21:01:16.545Z] Total : 15410.67 60.20 0.00 0.00 0.00 0.00 0.00 00:38:13.277 00:38:14.211 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:14.211 Nvme0n1 : 10.00 15452.10 60.36 0.00 0.00 0.00 0.00 0.00 00:38:14.211 [2024-10-11T21:01:17.479Z] =================================================================================================================== 00:38:14.211 [2024-10-11T21:01:17.479Z] Total : 15452.10 60.36 0.00 0.00 0.00 0.00 0.00 00:38:14.211 00:38:14.211 
00:38:14.211 Latency(us) 00:38:14.211 [2024-10-11T21:01:17.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:14.211 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:14.211 Nvme0n1 : 10.01 15450.43 60.35 0.00 0.00 8280.14 4320.52 17864.63 00:38:14.211 [2024-10-11T21:01:17.479Z] =================================================================================================================== 00:38:14.211 [2024-10-11T21:01:17.479Z] Total : 15450.43 60.35 0.00 0.00 8280.14 4320.52 17864.63 00:38:14.211 { 00:38:14.211 "results": [ 00:38:14.211 { 00:38:14.211 "job": "Nvme0n1", 00:38:14.211 "core_mask": "0x2", 00:38:14.211 "workload": "randwrite", 00:38:14.211 "status": "finished", 00:38:14.211 "queue_depth": 128, 00:38:14.211 "io_size": 4096, 00:38:14.211 "runtime": 10.009364, 00:38:14.211 "iops": 15450.432215273619, 00:38:14.211 "mibps": 60.35325084091257, 00:38:14.211 "io_failed": 0, 00:38:14.211 "io_timeout": 0, 00:38:14.211 "avg_latency_us": 8280.13676282468, 00:38:14.211 "min_latency_us": 4320.521481481482, 00:38:14.211 "max_latency_us": 17864.62814814815 00:38:14.211 } 00:38:14.211 ], 00:38:14.211 "core_count": 1 00:38:14.211 } 00:38:14.470 23:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 425495 00:38:14.470 23:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 425495 ']' 00:38:14.470 23:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 425495 00:38:14.470 23:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:38:14.470 23:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:14.470 23:01:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 425495 00:38:14.470 23:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:14.470 23:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:14.470 23:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 425495' 00:38:14.470 killing process with pid 425495 00:38:14.470 23:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 425495 00:38:14.470 Received shutdown signal, test time was about 10.000000 seconds 00:38:14.470 00:38:14.470 Latency(us) 00:38:14.470 [2024-10-11T21:01:17.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:14.470 [2024-10-11T21:01:17.738Z] =================================================================================================================== 00:38:14.470 [2024-10-11T21:01:17.738Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:14.470 23:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 425495 00:38:14.470 23:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:15.037 23:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:15.295 23:01:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0415eaec-8c2c-4750-9b77-e427f2f092bf 00:38:15.295 23:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:15.553 23:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:15.553 23:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:38:15.553 23:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 423013 00:38:15.553 23:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 423013 00:38:15.553 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 423013 Killed "${NVMF_APP[@]}" "$@" 00:38:15.553 23:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:38:15.553 23:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:38:15.553 23:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:15.553 23:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:15.553 23:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:15.553 23:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=426939 00:38:15.553 23:01:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:15.553 23:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 426939 00:38:15.553 23:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 426939 ']' 00:38:15.553 23:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:15.553 23:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:15.553 23:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:15.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:15.553 23:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:15.553 23:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:15.553 [2024-10-11 23:01:18.675583] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:15.553 [2024-10-11 23:01:18.676681] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:38:15.553 [2024-10-11 23:01:18.676735] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:15.553 [2024-10-11 23:01:18.741227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:15.553 [2024-10-11 23:01:18.785078] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:15.553 [2024-10-11 23:01:18.785148] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:15.553 [2024-10-11 23:01:18.785162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:15.553 [2024-10-11 23:01:18.785173] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:15.553 [2024-10-11 23:01:18.785182] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:15.553 [2024-10-11 23:01:18.785732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:15.812 [2024-10-11 23:01:18.869112] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:15.812 [2024-10-11 23:01:18.869409] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:15.812 23:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:15.812 23:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:38:15.812 23:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:15.812 23:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:15.812 23:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:15.812 23:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:15.812 23:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:16.070 [2024-10-11 23:01:19.264344] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:38:16.070 [2024-10-11 23:01:19.264470] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:38:16.070 [2024-10-11 23:01:19.264517] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:38:16.070 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:38:16.070 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6f528eb9-5bf6-488e-9e98-5fb6e09fde34 00:38:16.070 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local 
bdev_name=6f528eb9-5bf6-488e-9e98-5fb6e09fde34 00:38:16.070 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:16.070 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:38:16.070 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:16.070 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:16.070 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:16.329 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6f528eb9-5bf6-488e-9e98-5fb6e09fde34 -t 2000 00:38:16.587 [ 00:38:16.587 { 00:38:16.587 "name": "6f528eb9-5bf6-488e-9e98-5fb6e09fde34", 00:38:16.587 "aliases": [ 00:38:16.587 "lvs/lvol" 00:38:16.587 ], 00:38:16.587 "product_name": "Logical Volume", 00:38:16.587 "block_size": 4096, 00:38:16.587 "num_blocks": 38912, 00:38:16.587 "uuid": "6f528eb9-5bf6-488e-9e98-5fb6e09fde34", 00:38:16.587 "assigned_rate_limits": { 00:38:16.587 "rw_ios_per_sec": 0, 00:38:16.587 "rw_mbytes_per_sec": 0, 00:38:16.587 "r_mbytes_per_sec": 0, 00:38:16.587 "w_mbytes_per_sec": 0 00:38:16.587 }, 00:38:16.587 "claimed": false, 00:38:16.587 "zoned": false, 00:38:16.587 "supported_io_types": { 00:38:16.587 "read": true, 00:38:16.587 "write": true, 00:38:16.587 "unmap": true, 00:38:16.587 "flush": false, 00:38:16.587 "reset": true, 00:38:16.587 "nvme_admin": false, 00:38:16.587 "nvme_io": false, 00:38:16.587 "nvme_io_md": false, 00:38:16.587 "write_zeroes": true, 
00:38:16.587 "zcopy": false, 00:38:16.587 "get_zone_info": false, 00:38:16.587 "zone_management": false, 00:38:16.587 "zone_append": false, 00:38:16.587 "compare": false, 00:38:16.587 "compare_and_write": false, 00:38:16.587 "abort": false, 00:38:16.587 "seek_hole": true, 00:38:16.587 "seek_data": true, 00:38:16.587 "copy": false, 00:38:16.587 "nvme_iov_md": false 00:38:16.587 }, 00:38:16.587 "driver_specific": { 00:38:16.587 "lvol": { 00:38:16.587 "lvol_store_uuid": "0415eaec-8c2c-4750-9b77-e427f2f092bf", 00:38:16.587 "base_bdev": "aio_bdev", 00:38:16.587 "thin_provision": false, 00:38:16.587 "num_allocated_clusters": 38, 00:38:16.587 "snapshot": false, 00:38:16.587 "clone": false, 00:38:16.587 "esnap_clone": false 00:38:16.587 } 00:38:16.587 } 00:38:16.587 } 00:38:16.587 ] 00:38:16.587 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:38:16.587 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0415eaec-8c2c-4750-9b77-e427f2f092bf 00:38:16.587 23:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:38:17.153 23:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:38:17.153 23:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0415eaec-8c2c-4750-9b77-e427f2f092bf 00:38:17.153 23:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:38:17.412 23:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:38:17.412 23:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:17.670 [2024-10-11 23:01:20.718269] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:17.670 23:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0415eaec-8c2c-4750-9b77-e427f2f092bf 00:38:17.670 23:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:38:17.670 23:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0415eaec-8c2c-4750-9b77-e427f2f092bf 00:38:17.670 23:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:17.670 23:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:17.670 23:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:17.670 23:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:17.670 23:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:17.670 23:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:17.670 23:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:17.670 23:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:17.670 23:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0415eaec-8c2c-4750-9b77-e427f2f092bf 00:38:17.928 request: 00:38:17.928 { 00:38:17.928 "uuid": "0415eaec-8c2c-4750-9b77-e427f2f092bf", 00:38:17.928 "method": "bdev_lvol_get_lvstores", 00:38:17.928 "req_id": 1 00:38:17.928 } 00:38:17.928 Got JSON-RPC error response 00:38:17.928 response: 00:38:17.928 { 00:38:17.928 "code": -19, 00:38:17.928 "message": "No such device" 00:38:17.928 } 00:38:17.928 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:38:17.928 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:17.928 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:17.928 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:17.928 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:18.187 aio_bdev 00:38:18.187 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6f528eb9-5bf6-488e-9e98-5fb6e09fde34 00:38:18.187 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=6f528eb9-5bf6-488e-9e98-5fb6e09fde34 00:38:18.187 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:18.187 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:38:18.187 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:18.187 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:18.187 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:18.445 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6f528eb9-5bf6-488e-9e98-5fb6e09fde34 -t 2000 00:38:18.703 [ 00:38:18.703 { 00:38:18.703 "name": "6f528eb9-5bf6-488e-9e98-5fb6e09fde34", 00:38:18.703 "aliases": [ 00:38:18.703 "lvs/lvol" 00:38:18.703 ], 00:38:18.703 "product_name": "Logical Volume", 00:38:18.703 "block_size": 4096, 00:38:18.703 "num_blocks": 38912, 00:38:18.703 "uuid": "6f528eb9-5bf6-488e-9e98-5fb6e09fde34", 00:38:18.703 "assigned_rate_limits": { 00:38:18.703 "rw_ios_per_sec": 0, 00:38:18.703 "rw_mbytes_per_sec": 0, 00:38:18.703 
"r_mbytes_per_sec": 0, 00:38:18.703 "w_mbytes_per_sec": 0 00:38:18.703 }, 00:38:18.703 "claimed": false, 00:38:18.703 "zoned": false, 00:38:18.703 "supported_io_types": { 00:38:18.703 "read": true, 00:38:18.703 "write": true, 00:38:18.703 "unmap": true, 00:38:18.703 "flush": false, 00:38:18.703 "reset": true, 00:38:18.703 "nvme_admin": false, 00:38:18.703 "nvme_io": false, 00:38:18.703 "nvme_io_md": false, 00:38:18.703 "write_zeroes": true, 00:38:18.703 "zcopy": false, 00:38:18.703 "get_zone_info": false, 00:38:18.703 "zone_management": false, 00:38:18.703 "zone_append": false, 00:38:18.703 "compare": false, 00:38:18.703 "compare_and_write": false, 00:38:18.703 "abort": false, 00:38:18.703 "seek_hole": true, 00:38:18.703 "seek_data": true, 00:38:18.703 "copy": false, 00:38:18.703 "nvme_iov_md": false 00:38:18.703 }, 00:38:18.703 "driver_specific": { 00:38:18.703 "lvol": { 00:38:18.703 "lvol_store_uuid": "0415eaec-8c2c-4750-9b77-e427f2f092bf", 00:38:18.703 "base_bdev": "aio_bdev", 00:38:18.703 "thin_provision": false, 00:38:18.703 "num_allocated_clusters": 38, 00:38:18.703 "snapshot": false, 00:38:18.703 "clone": false, 00:38:18.703 "esnap_clone": false 00:38:18.703 } 00:38:18.703 } 00:38:18.703 } 00:38:18.703 ] 00:38:18.703 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:38:18.703 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0415eaec-8c2c-4750-9b77-e427f2f092bf 00:38:18.703 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:18.962 23:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:18.962 23:01:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0415eaec-8c2c-4750-9b77-e427f2f092bf 00:38:18.962 23:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:19.220 23:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:19.220 23:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6f528eb9-5bf6-488e-9e98-5fb6e09fde34 00:38:19.478 23:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0415eaec-8c2c-4750-9b77-e427f2f092bf 00:38:19.737 23:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:19.995 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:19.995 00:38:19.995 real 0m19.587s 00:38:19.995 user 0m36.576s 00:38:19.995 sys 0m4.720s 00:38:19.995 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:19.995 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:19.995 ************************************ 00:38:19.995 END TEST lvs_grow_dirty 00:38:19.995 ************************************ 
00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:38:20.253 nvmf_trace.0 00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:20.253 23:01:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:20.253 rmmod nvme_tcp 00:38:20.253 rmmod nvme_fabrics 00:38:20.253 rmmod nvme_keyring 00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 426939 ']' 00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 426939 00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 426939 ']' 00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 426939 00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 426939 00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:20.253 23:01:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 426939' 00:38:20.253 killing process with pid 426939 00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 426939 00:38:20.253 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 426939 00:38:20.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:20.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:20.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:20.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:38:20.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:38:20.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:38:20.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:20.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:20.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:20.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:20.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:20.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:22.421 23:01:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:22.421 00:38:22.421 real 0m42.679s 00:38:22.421 user 0m55.619s 00:38:22.421 sys 0m8.434s 00:38:22.421 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:22.421 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:22.421 ************************************ 00:38:22.421 END TEST nvmf_lvs_grow 00:38:22.421 ************************************ 00:38:22.421 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:22.421 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:22.421 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:22.421 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:22.421 ************************************ 00:38:22.421 START TEST nvmf_bdev_io_wait 00:38:22.421 ************************************ 00:38:22.421 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:22.680 * Looking for test storage... 
00:38:22.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:22.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.680 --rc genhtml_branch_coverage=1 00:38:22.680 --rc genhtml_function_coverage=1 00:38:22.680 --rc genhtml_legend=1 00:38:22.680 --rc geninfo_all_blocks=1 00:38:22.680 --rc geninfo_unexecuted_blocks=1 00:38:22.680 00:38:22.680 ' 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:22.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.680 --rc genhtml_branch_coverage=1 00:38:22.680 --rc genhtml_function_coverage=1 00:38:22.680 --rc genhtml_legend=1 00:38:22.680 --rc geninfo_all_blocks=1 00:38:22.680 --rc geninfo_unexecuted_blocks=1 00:38:22.680 00:38:22.680 ' 00:38:22.680 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:22.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.680 --rc genhtml_branch_coverage=1 00:38:22.681 --rc genhtml_function_coverage=1 00:38:22.681 --rc genhtml_legend=1 00:38:22.681 --rc geninfo_all_blocks=1 00:38:22.681 --rc geninfo_unexecuted_blocks=1 00:38:22.681 00:38:22.681 ' 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:22.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.681 --rc genhtml_branch_coverage=1 00:38:22.681 --rc genhtml_function_coverage=1 
00:38:22.681 --rc genhtml_legend=1 00:38:22.681 --rc geninfo_all_blocks=1 00:38:22.681 --rc geninfo_unexecuted_blocks=1 00:38:22.681 00:38:22.681 ' 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:22.681 23:01:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.681 23:01:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:22.681 23:01:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:22.681 23:01:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:38:22.681 23:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:24.584 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:24.584 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:38:24.584 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:24.584 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:24.584 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:24.584 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:24.584 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:24.584 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:38:24.584 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:24.584 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:38:24.584 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:38:24.584 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:38:24.584 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:38:24.584 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:38:24.584 23:01:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:38:24.584 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:24.584 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:24.584 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:24.584 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:24.584 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:24.584 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:24.584 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:24.584 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:24.843 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:24.843 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:24.843 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:24.843 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:24.843 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:24.843 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:24.843 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:24.843 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:24.843 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:24.843 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:24.843 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:24.843 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:24.843 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:24.843 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:24.843 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:24.843 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:24.843 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:24.843 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:24.843 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:24.843 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:24.843 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:38:24.843 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:24.843 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:24.844 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:24.844 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:38:24.844 23:01:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:24.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:24.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:38:24.844 00:38:24.844 --- 10.0.0.2 ping statistics --- 00:38:24.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:24.844 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:24.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:24.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:38:24.844 00:38:24.844 --- 10.0.0.1 ping statistics --- 00:38:24.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:24.844 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:24.844 23:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:24.844 23:01:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:38:24.844 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:24.844 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:24.844 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:24.844 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=429459 00:38:24.844 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:38:24.844 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 429459 00:38:24.844 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 429459 ']' 00:38:24.844 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:24.844 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:24.844 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:24.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:24.844 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:24.844 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:24.844 [2024-10-11 23:01:28.054225] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:24.844 [2024-10-11 23:01:28.055276] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:38:24.844 [2024-10-11 23:01:28.055345] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:25.103 [2024-10-11 23:01:28.122333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:25.103 [2024-10-11 23:01:28.172383] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:25.103 [2024-10-11 23:01:28.172451] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:25.103 [2024-10-11 23:01:28.172475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:25.103 [2024-10-11 23:01:28.172485] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:25.103 [2024-10-11 23:01:28.172495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:25.103 [2024-10-11 23:01:28.174128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:25.103 [2024-10-11 23:01:28.174195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:25.103 [2024-10-11 23:01:28.174262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:25.103 [2024-10-11 23:01:28.174259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:25.103 [2024-10-11 23:01:28.174772] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:25.103 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:25.103 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:38:25.103 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:25.103 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:25.103 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:25.103 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:25.103 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:38:25.103 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:25.103 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:25.103 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:25.103 23:01:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:38:25.103 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:25.103 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:25.362 [2024-10-11 23:01:28.382051] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:25.362 [2024-10-11 23:01:28.382244] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:25.362 [2024-10-11 23:01:28.383157] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:25.362 [2024-10-11 23:01:28.383994] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:38:25.362 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:25.362 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:25.362 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:25.362 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:25.362 [2024-10-11 23:01:28.390992] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:25.362 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:25.362 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:25.362 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:25.362 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:25.362 Malloc0 00:38:25.362 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:25.362 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:25.362 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:25.362 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:25.362 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:25.362 23:01:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:25.362 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:25.362 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:25.362 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:25.362 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:25.362 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:25.362 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:25.362 [2024-10-11 23:01:28.447152] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:25.362 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:25.362 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=429605 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=429606 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:38:25.363 23:01:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=429609 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:25.363 { 00:38:25.363 "params": { 00:38:25.363 "name": "Nvme$subsystem", 00:38:25.363 "trtype": "$TEST_TRANSPORT", 00:38:25.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:25.363 "adrfam": "ipv4", 00:38:25.363 "trsvcid": "$NVMF_PORT", 00:38:25.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:25.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:25.363 "hdgst": ${hdgst:-false}, 00:38:25.363 "ddgst": ${ddgst:-false} 00:38:25.363 }, 00:38:25.363 "method": "bdev_nvme_attach_controller" 00:38:25.363 } 00:38:25.363 EOF 00:38:25.363 )") 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:25.363 23:01:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=429611 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:25.363 { 00:38:25.363 "params": { 00:38:25.363 "name": "Nvme$subsystem", 00:38:25.363 "trtype": "$TEST_TRANSPORT", 00:38:25.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:25.363 "adrfam": "ipv4", 00:38:25.363 "trsvcid": "$NVMF_PORT", 00:38:25.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:25.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:25.363 "hdgst": ${hdgst:-false}, 00:38:25.363 "ddgst": ${ddgst:-false} 00:38:25.363 }, 00:38:25.363 "method": "bdev_nvme_attach_controller" 00:38:25.363 } 00:38:25.363 EOF 00:38:25.363 )") 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:38:25.363 23:01:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:25.363 { 00:38:25.363 "params": { 00:38:25.363 "name": "Nvme$subsystem", 00:38:25.363 "trtype": "$TEST_TRANSPORT", 00:38:25.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:25.363 "adrfam": "ipv4", 00:38:25.363 "trsvcid": "$NVMF_PORT", 00:38:25.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:25.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:25.363 "hdgst": ${hdgst:-false}, 00:38:25.363 "ddgst": ${ddgst:-false} 00:38:25.363 }, 00:38:25.363 "method": "bdev_nvme_attach_controller" 00:38:25.363 } 00:38:25.363 EOF 00:38:25.363 )") 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:25.363 { 00:38:25.363 "params": { 00:38:25.363 "name": "Nvme$subsystem", 00:38:25.363 "trtype": "$TEST_TRANSPORT", 00:38:25.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:25.363 "adrfam": "ipv4", 00:38:25.363 "trsvcid": "$NVMF_PORT", 00:38:25.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:25.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:25.363 "hdgst": ${hdgst:-false}, 00:38:25.363 "ddgst": ${ddgst:-false} 00:38:25.363 }, 00:38:25.363 "method": "bdev_nvme_attach_controller" 00:38:25.363 } 00:38:25.363 EOF 00:38:25.363 )") 00:38:25.363 
23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 429605 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:25.363 "params": { 00:38:25.363 "name": "Nvme1", 00:38:25.363 "trtype": "tcp", 00:38:25.363 "traddr": "10.0.0.2", 00:38:25.363 "adrfam": "ipv4", 00:38:25.363 "trsvcid": "4420", 00:38:25.363 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:25.363 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:25.363 "hdgst": false, 00:38:25.363 "ddgst": false 00:38:25.363 }, 00:38:25.363 "method": "bdev_nvme_attach_controller" 00:38:25.363 }' 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:25.363 "params": { 00:38:25.363 "name": "Nvme1", 00:38:25.363 "trtype": "tcp", 00:38:25.363 "traddr": "10.0.0.2", 00:38:25.363 "adrfam": "ipv4", 00:38:25.363 "trsvcid": "4420", 
00:38:25.363 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:25.363 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:25.363 "hdgst": false, 00:38:25.363 "ddgst": false 00:38:25.363 }, 00:38:25.363 "method": "bdev_nvme_attach_controller" 00:38:25.363 }' 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:25.363 "params": { 00:38:25.363 "name": "Nvme1", 00:38:25.363 "trtype": "tcp", 00:38:25.363 "traddr": "10.0.0.2", 00:38:25.363 "adrfam": "ipv4", 00:38:25.363 "trsvcid": "4420", 00:38:25.363 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:25.363 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:25.363 "hdgst": false, 00:38:25.363 "ddgst": false 00:38:25.363 }, 00:38:25.363 "method": "bdev_nvme_attach_controller" 00:38:25.363 }' 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:38:25.363 23:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:25.363 "params": { 00:38:25.363 "name": "Nvme1", 00:38:25.363 "trtype": "tcp", 00:38:25.363 "traddr": "10.0.0.2", 00:38:25.363 "adrfam": "ipv4", 00:38:25.363 "trsvcid": "4420", 00:38:25.363 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:25.363 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:25.363 "hdgst": false, 00:38:25.363 "ddgst": false 00:38:25.363 }, 00:38:25.363 "method": "bdev_nvme_attach_controller" 00:38:25.363 }' 00:38:25.363 [2024-10-11 23:01:28.498222] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:38:25.363 [2024-10-11 23:01:28.498246] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:38:25.363 [2024-10-11 23:01:28.498249] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:38:25.363 [2024-10-11 23:01:28.498245] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:38:25.363 [2024-10-11 23:01:28.498297] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:38:25.363 [2024-10-11 23:01:28.498336] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:38:25.363 [2024-10-11 23:01:28.498336] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:38:25.364 [2024-10-11 23:01:28.498339] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:38:25.622 [2024-10-11 23:01:28.674543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:25.622 [2024-10-11 23:01:28.716227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:25.622 [2024-10-11 23:01:28.773505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:25.622 [2024-10-11 23:01:28.815295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:25.622 [2024-10-11 23:01:28.869797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:25.893 [2024-10-11 23:01:28.914578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:25.893 [2024-10-11 23:01:28.944341] app.c: 919:spdk_app_start: *NOTICE*: Total cores
available: 1 00:38:25.893 [2024-10-11 23:01:28.983482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:38:25.893 Running I/O for 1 seconds... 00:38:25.893 Running I/O for 1 seconds... 00:38:25.893 Running I/O for 1 seconds... 00:38:26.155 Running I/O for 1 seconds... 00:38:27.089 6744.00 IOPS, 26.34 MiB/s [2024-10-11T21:01:30.357Z] 181600.00 IOPS, 709.38 MiB/s 00:38:27.089 Latency(us) 00:38:27.089 [2024-10-11T21:01:30.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:27.089 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:38:27.089 Nvme1n1 : 1.00 181257.45 708.04 0.00 0.00 702.40 297.34 1881.13 00:38:27.089 [2024-10-11T21:01:30.357Z] =================================================================================================================== 00:38:27.089 [2024-10-11T21:01:30.357Z] Total : 181257.45 708.04 0.00 0.00 702.40 297.34 1881.13 00:38:27.089 8390.00 IOPS, 32.77 MiB/s 00:38:27.089 Latency(us) 00:38:27.089 [2024-10-11T21:01:30.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:27.089 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:38:27.089 Nvme1n1 : 1.02 6764.48 26.42 0.00 0.00 18808.81 4417.61 35535.08 00:38:27.089 [2024-10-11T21:01:30.357Z] =================================================================================================================== 00:38:27.089 [2024-10-11T21:01:30.357Z] Total : 6764.48 26.42 0.00 0.00 18808.81 4417.61 35535.08 00:38:27.089 00:38:27.089 Latency(us) 00:38:27.089 [2024-10-11T21:01:30.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:27.089 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:38:27.089 Nvme1n1 : 1.01 8432.73 32.94 0.00 0.00 15096.01 4830.25 20583.16 00:38:27.089 [2024-10-11T21:01:30.357Z] =================================================================================================================== 
00:38:27.089 [2024-10-11T21:01:30.357Z] Total : 8432.73 32.94 0.00 0.00 15096.01 4830.25 20583.16 00:38:27.089 6586.00 IOPS, 25.73 MiB/s 00:38:27.089 Latency(us) 00:38:27.089 [2024-10-11T21:01:30.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:27.089 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:38:27.089 Nvme1n1 : 1.01 6685.08 26.11 0.00 0.00 19081.46 4854.52 38836.15 00:38:27.089 [2024-10-11T21:01:30.357Z] =================================================================================================================== 00:38:27.089 [2024-10-11T21:01:30.357Z] Total : 6685.08 26.11 0.00 0.00 19081.46 4854.52 38836.15 00:38:27.347 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 429606 00:38:27.347 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 429609 00:38:27.347 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 429611 00:38:27.347 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:27.347 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.347 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:27.347 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.347 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:38:27.347 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:38:27.347 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@514 -- # nvmfcleanup 00:38:27.347 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:38:27.347 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:27.347 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:38:27.347 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:27.347 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:27.347 rmmod nvme_tcp 00:38:27.347 rmmod nvme_fabrics 00:38:27.347 rmmod nvme_keyring 00:38:27.347 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:27.347 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:38:27.347 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:38:27.347 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 429459 ']' 00:38:27.347 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 429459 00:38:27.347 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 429459 ']' 00:38:27.347 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 429459 00:38:27.347 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:38:27.347 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:27.347 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 429459 00:38:27.347 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:27.347 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:27.347 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 429459' 00:38:27.348 killing process with pid 429459 00:38:27.348 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 429459 00:38:27.348 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 429459 00:38:27.608 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:27.608 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:27.608 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:27.608 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:38:27.608 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:27.608 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:38:27.608 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:38:27.608 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:27.608 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:27.608 23:01:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:27.608 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:27.608 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:29.517 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:29.517 00:38:29.517 real 0m7.072s 00:38:29.517 user 0m14.009s 00:38:29.517 sys 0m3.822s 00:38:29.517 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:29.517 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:29.517 ************************************ 00:38:29.517 END TEST nvmf_bdev_io_wait 00:38:29.517 ************************************ 00:38:29.517 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:29.517 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:29.517 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:29.517 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:29.776 ************************************ 00:38:29.776 START TEST nvmf_queue_depth 00:38:29.776 ************************************ 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 
00:38:29.776 * Looking for test storage... 00:38:29.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:29.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:29.776 --rc genhtml_branch_coverage=1 00:38:29.776 --rc genhtml_function_coverage=1 00:38:29.776 --rc genhtml_legend=1 00:38:29.776 --rc geninfo_all_blocks=1 00:38:29.776 --rc geninfo_unexecuted_blocks=1 00:38:29.776 00:38:29.776 ' 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:29.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:29.776 --rc genhtml_branch_coverage=1 00:38:29.776 --rc genhtml_function_coverage=1 00:38:29.776 --rc genhtml_legend=1 00:38:29.776 --rc geninfo_all_blocks=1 00:38:29.776 --rc geninfo_unexecuted_blocks=1 00:38:29.776 00:38:29.776 ' 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:29.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:29.776 --rc genhtml_branch_coverage=1 00:38:29.776 --rc genhtml_function_coverage=1 00:38:29.776 --rc genhtml_legend=1 00:38:29.776 --rc geninfo_all_blocks=1 00:38:29.776 --rc geninfo_unexecuted_blocks=1 00:38:29.776 00:38:29.776 ' 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:29.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:29.776 --rc genhtml_branch_coverage=1 00:38:29.776 --rc genhtml_function_coverage=1 00:38:29.776 
--rc genhtml_legend=1 00:38:29.776 --rc geninfo_all_blocks=1 00:38:29.776 --rc geninfo_unexecuted_blocks=1 00:38:29.776 00:38:29.776 ' 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:29.776 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:29.777 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:29.777 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:29.777 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:29.777 23:01:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:38:29.777 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:29.777 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:38:29.777 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:29.777 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:29.777 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:29.777 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:29.777 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:29.777 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:29.777 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:29.777 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:29.777 23:01:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:29.777 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:29.777 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:38:29.777 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:38:29.777 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:29.777 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:38:29.777 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:29.777 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:29.777 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:29.777 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:29.777 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:29.777 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:29.777 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:29.777 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:29.777 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:29.777 23:01:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:29.777 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:38:29.777 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:32.312 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:32.312 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:38:32.312 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:32.312 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:32.312 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:32.312 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:32.312 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:32.312 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:38:32.312 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:32.312 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:38:32.312 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:38:32.312 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:38:32.312 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:38:32.312 
23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:38:32.312 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:38:32.312 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:32.312 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:32.312 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:32.312 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:32.312 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:32.312 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:32.312 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:32.312 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:32.312 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:32.312 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:32.312 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:32.312 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:32.313 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:32.313 23:01:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:32.313 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 
)) 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:32.313 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:32.313 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:32.313 23:01:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:32.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:32.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:38:32.313 00:38:32.313 --- 10.0.0.2 ping statistics --- 00:38:32.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:32.313 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:32.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:32.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:38:32.313 00:38:32.313 --- 10.0.0.1 ping statistics --- 00:38:32.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:32.313 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:32.313 23:01:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=431824 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 431824 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 431824 ']' 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:32.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:32.313 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:32.313 [2024-10-11 23:01:35.336413] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:32.314 [2024-10-11 23:01:35.337508] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:38:32.314 [2024-10-11 23:01:35.337588] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:32.314 [2024-10-11 23:01:35.407183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:32.314 [2024-10-11 23:01:35.449984] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:32.314 [2024-10-11 23:01:35.450045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:32.314 [2024-10-11 23:01:35.450069] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:32.314 [2024-10-11 23:01:35.450080] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:32.314 [2024-10-11 23:01:35.450090] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:32.314 [2024-10-11 23:01:35.450676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:32.314 [2024-10-11 23:01:35.528526] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:32.314 [2024-10-11 23:01:35.528847] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:32.314 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:32.314 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:38:32.314 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:32.314 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:32.314 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:32.573 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:32.573 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:32.573 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:32.573 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:32.573 [2024-10-11 23:01:35.591261] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:32.573 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:32.573 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:32.573 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:32.573 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:32.573 Malloc0 00:38:32.573 23:01:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:32.573 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:32.573 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:32.573 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:32.573 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:32.573 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:32.573 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:32.573 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:32.573 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:32.573 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:32.573 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:32.573 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:32.573 [2024-10-11 23:01:35.655493] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:32.573 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:32.573 
23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=431852 00:38:32.573 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:38:32.573 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:32.573 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 431852 /var/tmp/bdevperf.sock 00:38:32.573 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 431852 ']' 00:38:32.573 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:32.573 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:32.573 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:32.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:32.573 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:32.573 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:32.573 [2024-10-11 23:01:35.705889] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:38:32.573 [2024-10-11 23:01:35.705951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid431852 ] 00:38:32.573 [2024-10-11 23:01:35.763734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:32.573 [2024-10-11 23:01:35.808562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:32.836 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:32.836 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:38:32.837 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:32.837 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:32.837 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:33.096 NVMe0n1 00:38:33.096 23:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:33.096 23:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:33.096 Running I/O for 10 seconds... 
00:38:35.404 8192.00 IOPS, 32.00 MiB/s [2024-10-11T21:01:39.606Z] 8316.50 IOPS, 32.49 MiB/s [2024-10-11T21:01:40.540Z] 8422.67 IOPS, 32.90 MiB/s [2024-10-11T21:01:41.474Z] 8449.25 IOPS, 33.00 MiB/s [2024-10-11T21:01:42.408Z] 8409.60 IOPS, 32.85 MiB/s [2024-10-11T21:01:43.342Z] 8484.50 IOPS, 33.14 MiB/s [2024-10-11T21:01:44.716Z] 8488.86 IOPS, 33.16 MiB/s [2024-10-11T21:01:45.651Z] 8549.38 IOPS, 33.40 MiB/s [2024-10-11T21:01:46.583Z] 8538.11 IOPS, 33.35 MiB/s [2024-10-11T21:01:46.583Z] 8580.30 IOPS, 33.52 MiB/s 00:38:43.315 Latency(us) 00:38:43.315 [2024-10-11T21:01:46.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:43.315 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:38:43.315 Verification LBA range: start 0x0 length 0x4000 00:38:43.315 NVMe0n1 : 10.14 8558.72 33.43 0.00 0.00 118597.16 21068.61 81944.27 00:38:43.315 [2024-10-11T21:01:46.583Z] =================================================================================================================== 00:38:43.315 [2024-10-11T21:01:46.583Z] Total : 8558.72 33.43 0.00 0.00 118597.16 21068.61 81944.27 00:38:43.315 { 00:38:43.315 "results": [ 00:38:43.315 { 00:38:43.315 "job": "NVMe0n1", 00:38:43.315 "core_mask": "0x1", 00:38:43.315 "workload": "verify", 00:38:43.315 "status": "finished", 00:38:43.315 "verify_range": { 00:38:43.315 "start": 0, 00:38:43.315 "length": 16384 00:38:43.315 }, 00:38:43.315 "queue_depth": 1024, 00:38:43.315 "io_size": 4096, 00:38:43.315 "runtime": 10.13937, 00:38:43.315 "iops": 8558.717158955635, 00:38:43.315 "mibps": 33.43248890217045, 00:38:43.315 "io_failed": 0, 00:38:43.315 "io_timeout": 0, 00:38:43.315 "avg_latency_us": 118597.16341071932, 00:38:43.315 "min_latency_us": 21068.61037037037, 00:38:43.315 "max_latency_us": 81944.27259259259 00:38:43.315 } 00:38:43.315 ], 00:38:43.315 "core_count": 1 00:38:43.315 } 00:38:43.315 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 
-- # killprocess 431852 00:38:43.315 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 431852 ']' 00:38:43.315 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 431852 00:38:43.315 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:38:43.315 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:43.315 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 431852 00:38:43.315 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:43.315 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:43.315 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 431852' 00:38:43.315 killing process with pid 431852 00:38:43.315 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 431852 00:38:43.315 Received shutdown signal, test time was about 10.000000 seconds 00:38:43.315 00:38:43.315 Latency(us) 00:38:43.315 [2024-10-11T21:01:46.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:43.315 [2024-10-11T21:01:46.583Z] =================================================================================================================== 00:38:43.315 [2024-10-11T21:01:46.583Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:43.315 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 431852 00:38:43.572 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth 
-- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:38:43.573 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:38:43.573 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:43.573 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:38:43.573 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:43.573 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:38:43.573 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:43.573 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:43.573 rmmod nvme_tcp 00:38:43.573 rmmod nvme_fabrics 00:38:43.573 rmmod nvme_keyring 00:38:43.573 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:43.573 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:38:43.573 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:38:43.573 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 431824 ']' 00:38:43.573 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 431824 00:38:43.573 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 431824 ']' 00:38:43.573 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 431824 00:38:43.573 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@955 -- # uname 00:38:43.573 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:43.573 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 431824 00:38:43.573 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:43.573 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:43.573 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 431824' 00:38:43.573 killing process with pid 431824 00:38:43.573 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 431824 00:38:43.573 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 431824 00:38:43.831 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:43.831 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:43.831 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:43.831 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:38:43.831 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:38:43.831 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:43.832 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:38:43.832 23:01:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:43.832 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:43.832 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:43.832 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:43.832 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:46.367 00:38:46.367 real 0m16.300s 00:38:46.367 user 0m22.501s 00:38:46.367 sys 0m3.484s 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:46.367 ************************************ 00:38:46.367 END TEST nvmf_queue_depth 00:38:46.367 ************************************ 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:46.367 ************************************ 00:38:46.367 START TEST 
nvmf_target_multipath 00:38:46.367 ************************************ 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:46.367 * Looking for test storage... 00:38:46.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:38:46.367 23:01:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:46.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.367 --rc genhtml_branch_coverage=1 00:38:46.367 --rc genhtml_function_coverage=1 00:38:46.367 --rc genhtml_legend=1 00:38:46.367 --rc geninfo_all_blocks=1 00:38:46.367 --rc geninfo_unexecuted_blocks=1 00:38:46.367 00:38:46.367 ' 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:46.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.367 --rc genhtml_branch_coverage=1 00:38:46.367 --rc genhtml_function_coverage=1 00:38:46.367 --rc genhtml_legend=1 00:38:46.367 --rc geninfo_all_blocks=1 00:38:46.367 --rc geninfo_unexecuted_blocks=1 00:38:46.367 00:38:46.367 ' 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:46.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.367 --rc genhtml_branch_coverage=1 00:38:46.367 --rc genhtml_function_coverage=1 00:38:46.367 --rc genhtml_legend=1 00:38:46.367 --rc geninfo_all_blocks=1 00:38:46.367 --rc geninfo_unexecuted_blocks=1 00:38:46.367 00:38:46.367 ' 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:46.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.367 --rc genhtml_branch_coverage=1 00:38:46.367 --rc genhtml_function_coverage=1 00:38:46.367 --rc genhtml_legend=1 00:38:46.367 --rc geninfo_all_blocks=1 00:38:46.367 --rc geninfo_unexecuted_blocks=1 00:38:46.367 00:38:46.367 ' 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:46.367 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:46.368 23:01:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:46.368 23:01:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:38:46.368 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:38:48.270 23:01:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:48.270 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:48.270 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:48.270 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:48.270 23:01:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:48.270 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:48.270 23:01:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:48.270 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:48.271 23:01:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:48.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:48.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:38:48.271 00:38:48.271 --- 10.0.0.2 ping statistics --- 00:38:48.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:48.271 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:48.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:48.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:38:48.271 00:38:48.271 --- 10.0.0.1 ping statistics --- 00:38:48.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:48.271 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:38:48.271 only one NIC for nvmf test 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:38:48.271 23:01:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:48.271 rmmod nvme_tcp 00:38:48.271 rmmod nvme_fabrics 00:38:48.271 rmmod nvme_keyring 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:38:48.271 23:01:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:48.271 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:50.813 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:50.813 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:38:50.813 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:38:50.813 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:50.813 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:50.813 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:50.813 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:50.813 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:38:50.813 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:50.813 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:50.813 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:50.814 
23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:50.814 00:38:50.814 real 0m4.441s 00:38:50.814 user 0m0.887s 00:38:50.814 sys 0m1.561s 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:38:50.814 ************************************ 00:38:50.814 END TEST nvmf_target_multipath 00:38:50.814 ************************************ 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:50.814 ************************************ 00:38:50.814 START TEST nvmf_zcopy 00:38:50.814 ************************************ 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:50.814 * Looking for test storage... 
00:38:50.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:38:50.814 23:01:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:50.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.814 --rc genhtml_branch_coverage=1 00:38:50.814 --rc genhtml_function_coverage=1 00:38:50.814 --rc genhtml_legend=1 00:38:50.814 --rc geninfo_all_blocks=1 00:38:50.814 --rc geninfo_unexecuted_blocks=1 00:38:50.814 00:38:50.814 ' 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:50.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.814 --rc genhtml_branch_coverage=1 00:38:50.814 --rc genhtml_function_coverage=1 00:38:50.814 --rc genhtml_legend=1 00:38:50.814 --rc geninfo_all_blocks=1 00:38:50.814 --rc geninfo_unexecuted_blocks=1 00:38:50.814 00:38:50.814 ' 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:50.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.814 --rc genhtml_branch_coverage=1 00:38:50.814 --rc genhtml_function_coverage=1 00:38:50.814 --rc genhtml_legend=1 00:38:50.814 --rc geninfo_all_blocks=1 00:38:50.814 --rc geninfo_unexecuted_blocks=1 00:38:50.814 00:38:50.814 ' 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:50.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.814 --rc genhtml_branch_coverage=1 00:38:50.814 --rc genhtml_function_coverage=1 00:38:50.814 --rc genhtml_legend=1 00:38:50.814 --rc geninfo_all_blocks=1 00:38:50.814 --rc geninfo_unexecuted_blocks=1 00:38:50.814 00:38:50.814 ' 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:50.814 23:01:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:50.814 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.815 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.815 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.815 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:38:50.815 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.815 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:38:50.815 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:50.815 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:50.815 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:50.815 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:50.815 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:50.815 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:50.815 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:50.815 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:50.815 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:50.815 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:50.815 23:01:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:38:50.815 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:50.815 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:50.815 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:50.815 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:50.815 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:50.815 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:50.815 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:50.815 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:50.815 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:50.815 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:50.815 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:38:50.815 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:52.808 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:52.808 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:38:52.808 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:52.808 
23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:52.808 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:52.808 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:52.808 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:52.808 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:38:52.808 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:52.808 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:38:52.808 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:38:52.808 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:38:52.808 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:38:52.808 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:38:52.808 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:38:52.808 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:52.808 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:52.809 23:01:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:52.809 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:52.809 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
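The device-matching trace above walks a list of PCI vendor:device IDs and buckets each NIC into a family (e810, x722, mlx) before deciding how to wire it up. A minimal sketch of that classification, with the ID tables abridged to the entries visible in this log (the real tables in nvmf/common.sh carry more device IDs):

```shell
#!/usr/bin/env bash
# Classify PCI vendor:device pairs into NIC families, mirroring the
# e810/x722/mlx bucketing done by gather_supported_nvmf_pci_devs
# in nvmf/common.sh (ID tables abridged to what this log shows).
classify_nic() {
  case "$1" in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 family
    0x8086:0x37d2)               echo x722 ;;    # Intel X722
    0x15b3:*)                    echo mlx ;;     # Mellanox vendor ID
    *)                           echo unknown ;;
  esac
}

# The two NICs found in this run (0x8086 - 0x159b) land in the e810 bucket:
classify_nic 0x8086:0x159b
```

Both ports discovered in the log (0000:0a:00.0 and 0000:0a:00.1) report device ID 0x159b, which is why the `[[ e810 == e810 ]]` branch is taken and `pci_devs` is narrowed to the e810 list.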
00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:52.809 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@416 -- # [[ up == up ]] 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:52.809 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
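With `cvl_0_0` chosen as the target interface and `cvl_0_1` as the initiator, `nvmf_tcp_init` moves the target-side NIC into a private network namespace and assigns the 10.0.0.x addresses. A sketch of that wiring, written as a command-list builder so it can be inspected without privileges (names and addresses follow the log; actually running the emitted commands requires CAP_NET_ADMIN):

```shell
#!/usr/bin/env bash
# Emit (without executing) the netns wiring performed by nvmf_tcp_init:
# the target NIC is moved into a namespace, the initiator NIC stays in
# the root namespace, and each side gets one address on 10.0.0.0/24.
build_netns_cmds() {
  local ns=$1 tgt_if=$2 ini_if=$3 tgt_ip=$4 ini_ip=$5
  printf '%s\n' \
    "ip netns add $ns" \
    "ip link set $tgt_if netns $ns" \
    "ip addr add $ini_ip/24 dev $ini_if" \
    "ip netns exec $ns ip addr add $tgt_ip/24 dev $tgt_if" \
    "ip link set $ini_if up" \
    "ip netns exec $ns ip link set $tgt_if up" \
    "ip netns exec $ns ip link set lo up"
}

# Values matching this run:
build_netns_cmds cvl_0_0_ns_spdk cvl_0_0 cvl_0_1 10.0.0.2 10.0.0.1
```

This split is what lets the subsequent `ping -c 1 10.0.0.2` (root namespace, initiator side) and `ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1` (target side) verify connectivity in both directions over real hardware.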
00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:52.809 23:01:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:52.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:52.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:38:52.809 00:38:52.809 --- 10.0.0.2 ping statistics --- 00:38:52.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:52.809 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:52.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:52.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:38:52.809 00:38:52.809 --- 10.0.0.1 ping statistics --- 00:38:52.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:52.809 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # 
nvmfpid=437020 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 437020 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 437020 ']' 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:52.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:52.809 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:52.809 [2024-10-11 23:01:55.908869] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:52.809 [2024-10-11 23:01:55.909996] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:38:52.809 [2024-10-11 23:01:55.910067] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:52.809 [2024-10-11 23:01:55.977591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:52.809 [2024-10-11 23:01:56.020496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:52.809 [2024-10-11 23:01:56.020580] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:52.809 [2024-10-11 23:01:56.020596] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:52.809 [2024-10-11 23:01:56.020608] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:52.809 [2024-10-11 23:01:56.020618] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:52.809 [2024-10-11 23:01:56.021188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:53.083 [2024-10-11 23:01:56.105733] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:53.083 [2024-10-11 23:01:56.106031] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
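After launching `nvmf_tgt` in the namespace, `waitforlisten` (common/autotest_common.sh) blocks until the new process exposes its JSON-RPC socket before any `rpc_cmd` calls are issued. A reduced sketch of that polling loop, assuming the default `/var/tmp/spdk.sock` path and a simple bounded retry budget (the real helper also checks that the PID is still alive and uses a longer timeout):

```shell
#!/usr/bin/env bash
# Poll for a UNIX-domain RPC socket with a bounded retry budget, in the
# spirit of waitforlisten from common/autotest_common.sh (simplified:
# no liveness check on the target PID).
wait_for_rpc_sock() {
  local sock=${1:-/var/tmp/spdk.sock} max_retries=${2:-100} i=0
  while [ "$i" -lt "$max_retries" ]; do
    # -S is true only when the path exists and is a socket
    [ -S "$sock" ] && return 0
    sleep 0.1
    i=$((i + 1))
  done
  return 1
}
```

Only once this returns success does the harness proceed to `nvmf_create_transport`, subsystem creation, and listener setup seen below.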
00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:53.083 [2024-10-11 23:01:56.173790] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:53.083 
23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:53.083 [2024-10-11 23:01:56.189981] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:53.083 malloc0 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:53.083 { 00:38:53.083 "params": { 00:38:53.083 "name": "Nvme$subsystem", 00:38:53.083 "trtype": "$TEST_TRANSPORT", 00:38:53.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:53.083 "adrfam": "ipv4", 00:38:53.083 "trsvcid": "$NVMF_PORT", 00:38:53.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:53.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:53.083 "hdgst": ${hdgst:-false}, 00:38:53.083 "ddgst": ${ddgst:-false} 00:38:53.083 }, 00:38:53.083 "method": "bdev_nvme_attach_controller" 00:38:53.083 } 00:38:53.083 EOF 00:38:53.083 )") 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:38:53.083 23:01:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:38:53.083 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:53.083 "params": { 00:38:53.083 "name": "Nvme1", 00:38:53.083 "trtype": "tcp", 00:38:53.083 "traddr": "10.0.0.2", 00:38:53.083 "adrfam": "ipv4", 00:38:53.083 "trsvcid": "4420", 00:38:53.084 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:53.084 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:53.084 "hdgst": false, 00:38:53.084 "ddgst": false 00:38:53.084 }, 00:38:53.084 "method": "bdev_nvme_attach_controller" 00:38:53.084 }' 00:38:53.084 [2024-10-11 23:01:56.274757] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:38:53.084 [2024-10-11 23:01:56.274839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid437051 ] 00:38:53.084 [2024-10-11 23:01:56.333589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:53.341 [2024-10-11 23:01:56.381978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:53.341 Running I/O for 10 seconds... 
00:38:55.646 5553.00 IOPS, 43.38 MiB/s [2024-10-11T21:01:59.847Z] 5563.50 IOPS, 43.46 MiB/s [2024-10-11T21:02:00.779Z] 5566.00 IOPS, 43.48 MiB/s [2024-10-11T21:02:01.712Z] 5557.75 IOPS, 43.42 MiB/s [2024-10-11T21:02:02.644Z] 5567.40 IOPS, 43.50 MiB/s [2024-10-11T21:02:04.017Z] 5567.50 IOPS, 43.50 MiB/s [2024-10-11T21:02:04.950Z] 5572.57 IOPS, 43.54 MiB/s [2024-10-11T21:02:05.882Z] 5573.75 IOPS, 43.54 MiB/s [2024-10-11T21:02:06.812Z] 5576.67 IOPS, 43.57 MiB/s [2024-10-11T21:02:06.812Z] 5579.10 IOPS, 43.59 MiB/s 00:39:03.544 Latency(us) 00:39:03.544 [2024-10-11T21:02:06.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:03.544 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:39:03.544 Verification LBA range: start 0x0 length 0x1000 00:39:03.544 Nvme1n1 : 10.02 5581.75 43.61 0.00 0.00 22869.35 3665.16 29515.47 00:39:03.544 [2024-10-11T21:02:06.812Z] =================================================================================================================== 00:39:03.544 [2024-10-11T21:02:06.812Z] Total : 5581.75 43.61 0.00 0.00 22869.35 3665.16 29515.47 00:39:03.803 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=438227 00:39:03.803 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:39:03.803 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:03.803 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:39:03.803 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:39:03.803 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:39:03.803 23:02:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:39:03.803 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:39:03.803 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:39:03.803 { 00:39:03.803 "params": { 00:39:03.803 "name": "Nvme$subsystem", 00:39:03.803 "trtype": "$TEST_TRANSPORT", 00:39:03.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:03.803 "adrfam": "ipv4", 00:39:03.803 "trsvcid": "$NVMF_PORT", 00:39:03.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:03.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:03.803 "hdgst": ${hdgst:-false}, 00:39:03.803 "ddgst": ${ddgst:-false} 00:39:03.803 }, 00:39:03.803 "method": "bdev_nvme_attach_controller" 00:39:03.803 } 00:39:03.803 EOF 00:39:03.803 )") 00:39:03.803 [2024-10-11 23:02:06.841766] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.803 [2024-10-11 23:02:06.841811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.803 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:39:03.803 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
00:39:03.803 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:39:03.803 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:39:03.803 "params": { 00:39:03.803 "name": "Nvme1", 00:39:03.803 "trtype": "tcp", 00:39:03.803 "traddr": "10.0.0.2", 00:39:03.803 "adrfam": "ipv4", 00:39:03.803 "trsvcid": "4420", 00:39:03.803 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:03.803 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:03.803 "hdgst": false, 00:39:03.803 "ddgst": false 00:39:03.803 }, 00:39:03.803 "method": "bdev_nvme_attach_controller" 00:39:03.803 }' 00:39:03.803 [2024-10-11 23:02:06.849673] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.803 [2024-10-11 23:02:06.849698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.803 [2024-10-11 23:02:06.857671] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.803 [2024-10-11 23:02:06.857694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.803 [2024-10-11 23:02:06.865671] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.803 [2024-10-11 23:02:06.865694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.803 [2024-10-11 23:02:06.873669] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.803 [2024-10-11 23:02:06.873691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.803 [2024-10-11 23:02:06.881678] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.803 [2024-10-11 23:02:06.881699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.803 [2024-10-11 23:02:06.888071] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:39:03.803 [2024-10-11 23:02:06.888162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid438227 ] 00:39:03.803 [2024-10-11 23:02:06.889679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.803 [2024-10-11 23:02:06.889700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.803 [2024-10-11 23:02:06.897680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.803 [2024-10-11 23:02:06.897700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.803 [2024-10-11 23:02:06.905666] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.803 [2024-10-11 23:02:06.905687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.803 [2024-10-11 23:02:06.913664] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.803 [2024-10-11 23:02:06.913685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.803 [2024-10-11 23:02:06.921688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.803 [2024-10-11 23:02:06.921709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.803 [2024-10-11 23:02:06.929687] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.803 [2024-10-11 23:02:06.929708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.803 [2024-10-11 23:02:06.937669] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.803 [2024-10-11 23:02:06.937690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:39:03.803 [2024-10-11 23:02:06.945681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.803 [2024-10-11 23:02:06.945702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.803 [2024-10-11 23:02:06.948973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:03.804 [2024-10-11 23:02:06.953679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.804 [2024-10-11 23:02:06.953704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.804 [2024-10-11 23:02:06.961724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.804 [2024-10-11 23:02:06.961761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.804 [2024-10-11 23:02:06.969700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.804 [2024-10-11 23:02:06.969729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.804 [2024-10-11 23:02:06.977671] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.804 [2024-10-11 23:02:06.977694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.804 [2024-10-11 23:02:06.985669] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.804 [2024-10-11 23:02:06.985691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.804 [2024-10-11 23:02:06.993668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.804 [2024-10-11 23:02:06.993690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.804 [2024-10-11 23:02:06.998589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:03.804 [2024-10-11 23:02:07.001668] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.804 [2024-10-11 23:02:07.001689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.804 [2024-10-11 23:02:07.009667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.804 [2024-10-11 23:02:07.009688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.804 [2024-10-11 23:02:07.017724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.804 [2024-10-11 23:02:07.017763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.804 [2024-10-11 23:02:07.025724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.804 [2024-10-11 23:02:07.025762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.804 [2024-10-11 23:02:07.033719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.804 [2024-10-11 23:02:07.033760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.804 [2024-10-11 23:02:07.041738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.804 [2024-10-11 23:02:07.041778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.804 [2024-10-11 23:02:07.049734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.804 [2024-10-11 23:02:07.049774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.804 [2024-10-11 23:02:07.057717] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.804 [2024-10-11 23:02:07.057757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.804 [2024-10-11 23:02:07.065680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:03.804 [2024-10-11 23:02:07.065705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.062 [2024-10-11 23:02:07.073740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.062 [2024-10-11 23:02:07.073781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.062 [2024-10-11 23:02:07.081727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.062 [2024-10-11 23:02:07.081769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.062 [2024-10-11 23:02:07.089723] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.062 [2024-10-11 23:02:07.089762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.062 [2024-10-11 23:02:07.097667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.062 [2024-10-11 23:02:07.097690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.062 [2024-10-11 23:02:07.105682] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.062 [2024-10-11 23:02:07.105708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.062 [2024-10-11 23:02:07.113675] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.062 [2024-10-11 23:02:07.113702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.062 [2024-10-11 23:02:07.121690] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.062 [2024-10-11 23:02:07.121714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.062 [2024-10-11 23:02:07.129673] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.062 
[2024-10-11 23:02:07.129698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.062 [2024-10-11 23:02:07.137670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.062 [2024-10-11 23:02:07.137694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.062 [2024-10-11 23:02:07.145667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.062 [2024-10-11 23:02:07.145690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.062 [2024-10-11 23:02:07.153668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.062 [2024-10-11 23:02:07.153690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.062 [2024-10-11 23:02:07.161667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.062 [2024-10-11 23:02:07.161689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.062 [2024-10-11 23:02:07.169664] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.062 [2024-10-11 23:02:07.169685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.062 [2024-10-11 23:02:07.177684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.062 [2024-10-11 23:02:07.177707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.062 [2024-10-11 23:02:07.185763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.062 [2024-10-11 23:02:07.185792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.062 [2024-10-11 23:02:07.193684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.062 [2024-10-11 23:02:07.193708] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.062 [2024-10-11 23:02:07.201670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.062 [2024-10-11 23:02:07.201693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.062 [2024-10-11 23:02:07.209756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.062 [2024-10-11 23:02:07.209783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.062 Running I/O for 5 seconds... 00:39:04.062 [2024-10-11 23:02:07.217691] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.062 [2024-10-11 23:02:07.217716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.062 [2024-10-11 23:02:07.234941] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.062 [2024-10-11 23:02:07.234971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.062 [2024-10-11 23:02:07.246523] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.062 [2024-10-11 23:02:07.246548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.062 [2024-10-11 23:02:07.257465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.062 [2024-10-11 23:02:07.257499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.062 [2024-10-11 23:02:07.268929] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.062 [2024-10-11 23:02:07.268955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.062 [2024-10-11 23:02:07.283264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.062 [2024-10-11 23:02:07.283305] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.062 [2024-10-11 23:02:07.292966] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.062 [2024-10-11 23:02:07.293005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.062 [2024-10-11 23:02:07.304932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.062 [2024-10-11 23:02:07.304959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.062 [2024-10-11 23:02:07.319616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.062 [2024-10-11 23:02:07.319658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.062 [2024-10-11 23:02:07.329725] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.062 [2024-10-11 23:02:07.329752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.321 [2024-10-11 23:02:07.342166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.321 [2024-10-11 23:02:07.342194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.321 [2024-10-11 23:02:07.352069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.321 [2024-10-11 23:02:07.352096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.321 [2024-10-11 23:02:07.363973] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.321 [2024-10-11 23:02:07.364001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.321 [2024-10-11 23:02:07.379431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.321 [2024-10-11 23:02:07.379459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:04.321 [2024-10-11 23:02:07.389258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.321 [2024-10-11 23:02:07.389285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.321 [2024-10-11 23:02:07.401097] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.321 [2024-10-11 23:02:07.401124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.321 [2024-10-11 23:02:07.414071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.321 [2024-10-11 23:02:07.414098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.321 [2024-10-11 23:02:07.424030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.321 [2024-10-11 23:02:07.424055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.321 [2024-10-11 23:02:07.438649] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.321 [2024-10-11 23:02:07.438675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.321 [2024-10-11 23:02:07.448759] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.321 [2024-10-11 23:02:07.448785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.321 [2024-10-11 23:02:07.460680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.321 [2024-10-11 23:02:07.460707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.321 [2024-10-11 23:02:07.476449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.322 [2024-10-11 23:02:07.476488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.322 [2024-10-11 23:02:07.489906] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.322 [2024-10-11 23:02:07.489939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.322 [2024-10-11 23:02:07.500197] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.322 [2024-10-11 23:02:07.500238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.322 [2024-10-11 23:02:07.512302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.322 [2024-10-11 23:02:07.512329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.322 [2024-10-11 23:02:07.526584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.322 [2024-10-11 23:02:07.526612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.322 [2024-10-11 23:02:07.536007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.322 [2024-10-11 23:02:07.536033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.322 [2024-10-11 23:02:07.548275] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.322 [2024-10-11 23:02:07.548319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.322 [2024-10-11 23:02:07.562831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.322 [2024-10-11 23:02:07.562858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.322 [2024-10-11 23:02:07.572404] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.322 [2024-10-11 23:02:07.572431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.322 [2024-10-11 23:02:07.584835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:04.322 [2024-10-11 23:02:07.584862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.580 [2024-10-11 23:02:07.598253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.580 [2024-10-11 23:02:07.598281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.580 [2024-10-11 23:02:07.607721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.580 [2024-10-11 23:02:07.607748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.580 [2024-10-11 23:02:07.623057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.580 [2024-10-11 23:02:07.623082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.580 [2024-10-11 23:02:07.633877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.580 [2024-10-11 23:02:07.633920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.580 [2024-10-11 23:02:07.644966] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.580 [2024-10-11 23:02:07.644992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.580 [2024-10-11 23:02:07.658856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.580 [2024-10-11 23:02:07.658897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.580 [2024-10-11 23:02:07.668070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.580 [2024-10-11 23:02:07.668097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.580 [2024-10-11 23:02:07.680124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.580 
[2024-10-11 23:02:07.680151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.580 [2024-10-11 23:02:07.695547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.580 [2024-10-11 23:02:07.695584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.580 [2024-10-11 23:02:07.705221] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.580 [2024-10-11 23:02:07.705248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.580 [2024-10-11 23:02:07.717455] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.580 [2024-10-11 23:02:07.717491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.580 [2024-10-11 23:02:07.728591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.580 [2024-10-11 23:02:07.728619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.580 [2024-10-11 23:02:07.741739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.580 [2024-10-11 23:02:07.741767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.580 [2024-10-11 23:02:07.751167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.580 [2024-10-11 23:02:07.751195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.580 [2024-10-11 23:02:07.763063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.580 [2024-10-11 23:02:07.763091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.580 [2024-10-11 23:02:07.774284] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.580 [2024-10-11 23:02:07.774310] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.580 [2024-10-11 23:02:07.784414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.580 [2024-10-11 23:02:07.784441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.580 [2024-10-11 23:02:07.796702] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.580 [2024-10-11 23:02:07.796730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.580 [2024-10-11 23:02:07.811086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.580 [2024-10-11 23:02:07.811129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.580 [2024-10-11 23:02:07.820699] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.580 [2024-10-11 23:02:07.820726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.580 [2024-10-11 23:02:07.832313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.580 [2024-10-11 23:02:07.832339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.580 [2024-10-11 23:02:07.843226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.580 [2024-10-11 23:02:07.843254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.839 [2024-10-11 23:02:07.854605] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.839 [2024-10-11 23:02:07.854633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.839 [2024-10-11 23:02:07.865350] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.839 [2024-10-11 23:02:07.865377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:04.839 [2024-10-11 23:02:07.876380] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.839 [2024-10-11 23:02:07.876407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.839 [2024-10-11 23:02:07.890793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.839 [2024-10-11 23:02:07.890822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.839 [2024-10-11 23:02:07.900269] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.839 [2024-10-11 23:02:07.900294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.839 [2024-10-11 23:02:07.912677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.839 [2024-10-11 23:02:07.912708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.839 [2024-10-11 23:02:07.928665] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.839 [2024-10-11 23:02:07.928692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.839 [2024-10-11 23:02:07.943105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.839 [2024-10-11 23:02:07.943145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.839 [2024-10-11 23:02:07.952304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.839 [2024-10-11 23:02:07.952332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.839 [2024-10-11 23:02:07.964425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.839 [2024-10-11 23:02:07.964453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.839 [2024-10-11 23:02:07.979504] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.839 [2024-10-11 23:02:07.979532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.839 [2024-10-11 23:02:07.989986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.839 [2024-10-11 23:02:07.990013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.839 [2024-10-11 23:02:08.000860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.839 [2024-10-11 23:02:08.000885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.839 [2024-10-11 23:02:08.015054] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.839 [2024-10-11 23:02:08.015082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.839 [2024-10-11 23:02:08.024919] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.839 [2024-10-11 23:02:08.024946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.839 [2024-10-11 23:02:08.037013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.839 [2024-10-11 23:02:08.037040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.839 [2024-10-11 23:02:08.050714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.839 [2024-10-11 23:02:08.050743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.839 [2024-10-11 23:02:08.060176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.839 [2024-10-11 23:02:08.060203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.839 [2024-10-11 23:02:08.072624] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:04.839 [2024-10-11 23:02:08.072666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.839 [2024-10-11 23:02:08.087701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.839 [2024-10-11 23:02:08.087728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.839 [2024-10-11 23:02:08.102993] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.839 [2024-10-11 23:02:08.103021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.097 [2024-10-11 23:02:08.112400] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.097 [2024-10-11 23:02:08.112428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.097 [2024-10-11 23:02:08.124561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.097 [2024-10-11 23:02:08.124589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.097 [2024-10-11 23:02:08.138707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.097 [2024-10-11 23:02:08.138734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.098 [2024-10-11 23:02:08.148125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.098 [2024-10-11 23:02:08.148153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.098 [2024-10-11 23:02:08.160455] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.098 [2024-10-11 23:02:08.160481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.098 [2024-10-11 23:02:08.174869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.098 
[2024-10-11 23:02:08.174906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.098 [2024-10-11 23:02:08.184985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.098 [2024-10-11 23:02:08.185011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.098 [2024-10-11 23:02:08.196856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.098 [2024-10-11 23:02:08.196883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.098 [2024-10-11 23:02:08.210141] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.098 [2024-10-11 23:02:08.210183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.098 [2024-10-11 23:02:08.220214] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.098 [2024-10-11 23:02:08.220240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.098 11421.00 IOPS, 89.23 MiB/s [2024-10-11T21:02:08.366Z] [2024-10-11 23:02:08.232432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.098 [2024-10-11 23:02:08.232473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.098 [2024-10-11 23:02:08.243324] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.098 [2024-10-11 23:02:08.243351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.098 [2024-10-11 23:02:08.254748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.098 [2024-10-11 23:02:08.254775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.098 [2024-10-11 23:02:08.271051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.098 
[2024-10-11 23:02:08.271079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.098 [2024-10-11 23:02:08.281623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.098 [2024-10-11 23:02:08.281651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.098 [2024-10-11 23:02:08.293081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.098 [2024-10-11 23:02:08.293105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.098 [2024-10-11 23:02:08.308524] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.098 [2024-10-11 23:02:08.308557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.098 [2024-10-11 23:02:08.322213] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.098 [2024-10-11 23:02:08.322242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.098 [2024-10-11 23:02:08.331803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.098 [2024-10-11 23:02:08.331830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.098 [2024-10-11 23:02:08.346632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.098 [2024-10-11 23:02:08.346659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.098 [2024-10-11 23:02:08.357018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.098 [2024-10-11 23:02:08.357044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.356 [2024-10-11 23:02:08.369373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.356 [2024-10-11 23:02:08.369401] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.356 [2024-10-11 23:02:08.379648] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.356 [2024-10-11 23:02:08.379674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.356 [2024-10-11 23:02:08.391888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.356 [2024-10-11 23:02:08.391914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.356 [2024-10-11 23:02:08.402681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.356 [2024-10-11 23:02:08.402709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.356 [2024-10-11 23:02:08.413935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.356 [2024-10-11 23:02:08.413975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.356 [2024-10-11 23:02:08.424880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.356 [2024-10-11 23:02:08.424908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.356 [2024-10-11 23:02:08.438253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.356 [2024-10-11 23:02:08.438281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.356 [2024-10-11 23:02:08.448337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.356 [2024-10-11 23:02:08.448364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.356 [2024-10-11 23:02:08.460793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.356 [2024-10-11 23:02:08.460820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:05.356 [2024-10-11 23:02:08.474135] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.356 [2024-10-11 23:02:08.474178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.356 [2024-10-11 23:02:08.483709] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.356 [2024-10-11 23:02:08.483735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.356 [2024-10-11 23:02:08.498663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.356 [2024-10-11 23:02:08.498691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.356 [2024-10-11 23:02:08.509470] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.356 [2024-10-11 23:02:08.509494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.356 [2024-10-11 23:02:08.520226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.356 [2024-10-11 23:02:08.520253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.356 [2024-10-11 23:02:08.535323] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.356 [2024-10-11 23:02:08.535349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.356 [2024-10-11 23:02:08.544727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.356 [2024-10-11 23:02:08.544752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.356 [2024-10-11 23:02:08.556571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.356 [2024-10-11 23:02:08.556598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.356 [2024-10-11 23:02:08.571126] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.356 [2024-10-11 23:02:08.571152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.356 [2024-10-11 23:02:08.580474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.356 [2024-10-11 23:02:08.580502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.356 [2024-10-11 23:02:08.592454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.356 [2024-10-11 23:02:08.592479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.356 [2024-10-11 23:02:08.606800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.357 [2024-10-11 23:02:08.606842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.357 [2024-10-11 23:02:08.615576] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.357 [2024-10-11 23:02:08.615602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.615 [2024-10-11 23:02:08.630997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.615 [2024-10-11 23:02:08.631023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.615 [2024-10-11 23:02:08.641888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.615 [2024-10-11 23:02:08.641915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.615 [2024-10-11 23:02:08.652774] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.615 [2024-10-11 23:02:08.652815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.615 [2024-10-11 23:02:08.663531] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:05.615 [2024-10-11 23:02:08.663578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.615 [2024-10-11 23:02:08.679743] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.615 [2024-10-11 23:02:08.679784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.615 [2024-10-11 23:02:08.690628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.615 [2024-10-11 23:02:08.690655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.615 [2024-10-11 23:02:08.701191] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.615 [2024-10-11 23:02:08.701218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.615 [2024-10-11 23:02:08.713810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.615 [2024-10-11 23:02:08.713837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.615 [2024-10-11 23:02:08.723231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.615 [2024-10-11 23:02:08.723273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.615 [2024-10-11 23:02:08.735643] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.615 [2024-10-11 23:02:08.735669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.615 [2024-10-11 23:02:08.747008] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.615 [2024-10-11 23:02:08.747048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.615 [2024-10-11 23:02:08.757631] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.615 
[2024-10-11 23:02:08.757671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.615 [2024-10-11 23:02:08.768670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.615 [2024-10-11 23:02:08.768697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.615 [2024-10-11 23:02:08.784079] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.615 [2024-10-11 23:02:08.784107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.615 [2024-10-11 23:02:08.798338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.615 [2024-10-11 23:02:08.798364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.615 [2024-10-11 23:02:08.807732] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.615 [2024-10-11 23:02:08.807759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.615 [2024-10-11 23:02:08.822855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.615 [2024-10-11 23:02:08.822895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.615 [2024-10-11 23:02:08.833367] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.615 [2024-10-11 23:02:08.833393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.615 [2024-10-11 23:02:08.844392] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.615 [2024-10-11 23:02:08.844427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.615 [2024-10-11 23:02:08.859751] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.615 [2024-10-11 23:02:08.859778] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.615 [2024-10-11 23:02:08.873963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.615 [2024-10-11 23:02:08.874005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.874 [2024-10-11 23:02:08.884860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.874 [2024-10-11 23:02:08.884885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.874 [2024-10-11 23:02:08.899242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.874 [2024-10-11 23:02:08.899283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.874 [2024-10-11 23:02:08.908660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.874 [2024-10-11 23:02:08.908687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.874 [2024-10-11 23:02:08.920850] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.874 [2024-10-11 23:02:08.920877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.874 [2024-10-11 23:02:08.935820] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.875 [2024-10-11 23:02:08.935847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.875 [2024-10-11 23:02:08.945476] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.875 [2024-10-11 23:02:08.945503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.875 [2024-10-11 23:02:08.958141] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.875 [2024-10-11 23:02:08.958181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:05.875 [2024-10-11 23:02:08.968163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.875 [2024-10-11 23:02:08.968190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.875 [2024-10-11 23:02:08.980633] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.875 [2024-10-11 23:02:08.980659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.875 [2024-10-11 23:02:08.995271] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.875 [2024-10-11 23:02:08.995296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.875 [2024-10-11 23:02:09.004629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.875 [2024-10-11 23:02:09.004656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.875 [2024-10-11 23:02:09.016854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.875 [2024-10-11 23:02:09.016880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.875 [2024-10-11 23:02:09.031952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.875 [2024-10-11 23:02:09.031979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.875 [2024-10-11 23:02:09.046671] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.875 [2024-10-11 23:02:09.046697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.875 [2024-10-11 23:02:09.056401] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.875 [2024-10-11 23:02:09.056427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.875 [2024-10-11 23:02:09.068561] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.875 [2024-10-11 23:02:09.068601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.875 [2024-10-11 23:02:09.084343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.875 [2024-10-11 23:02:09.084374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.875 [2024-10-11 23:02:09.099386] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.875 [2024-10-11 23:02:09.099414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.875 [2024-10-11 23:02:09.108771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.875 [2024-10-11 23:02:09.108798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.875 [2024-10-11 23:02:09.121350] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.875 [2024-10-11 23:02:09.121375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.875 [2024-10-11 23:02:09.134602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.875 [2024-10-11 23:02:09.134629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.133 [2024-10-11 23:02:09.144015] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.133 [2024-10-11 23:02:09.144057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.133 [2024-10-11 23:02:09.156623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.133 [2024-10-11 23:02:09.156649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.133 [2024-10-11 23:02:09.172391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:06.133 [2024-10-11 23:02:09.172418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.133 [2024-10-11 23:02:09.185010] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.133 [2024-10-11 23:02:09.185037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.133 [2024-10-11 23:02:09.197945] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.133 [2024-10-11 23:02:09.197970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.133 [2024-10-11 23:02:09.207539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.133 [2024-10-11 23:02:09.207574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.133 [2024-10-11 23:02:09.222611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.133 [2024-10-11 23:02:09.222636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.133 11440.00 IOPS, 89.38 MiB/s [2024-10-11T21:02:09.401Z] [2024-10-11 23:02:09.233651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.133 [2024-10-11 23:02:09.233677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.133 [2024-10-11 23:02:09.244397] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.133 [2024-10-11 23:02:09.244424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.133 [2024-10-11 23:02:09.258756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.133 [2024-10-11 23:02:09.258782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.133 [2024-10-11 23:02:09.268465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:06.133 [2024-10-11 23:02:09.268505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.133 [2024-10-11 23:02:09.280689] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.133 [2024-10-11 23:02:09.280716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.133 [2024-10-11 23:02:09.296367] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.133 [2024-10-11 23:02:09.296409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.133 [2024-10-11 23:02:09.310537] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.133 [2024-10-11 23:02:09.310572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.133 [2024-10-11 23:02:09.320308] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.133 [2024-10-11 23:02:09.320340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.133 [2024-10-11 23:02:09.332679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.133 [2024-10-11 23:02:09.332707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.133 [2024-10-11 23:02:09.347993] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.133 [2024-10-11 23:02:09.348019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.133 [2024-10-11 23:02:09.363468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.133 [2024-10-11 23:02:09.363496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.133 [2024-10-11 23:02:09.372864] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.133 
[2024-10-11 23:02:09.372892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.133 [2024-10-11 23:02:09.384721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.133 [2024-10-11 23:02:09.384749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.133 [2024-10-11 23:02:09.397561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.133 [2024-10-11 23:02:09.397591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.391 [2024-10-11 23:02:09.407672] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.391 [2024-10-11 23:02:09.407709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.391 [2024-10-11 23:02:09.422807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.391 [2024-10-11 23:02:09.422834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.391 [2024-10-11 23:02:09.432582] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.391 [2024-10-11 23:02:09.432622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.391 [2024-10-11 23:02:09.444527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.391 [2024-10-11 23:02:09.444576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.391 [2024-10-11 23:02:09.459981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.391 [2024-10-11 23:02:09.460026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.391 [2024-10-11 23:02:09.474647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.391 [2024-10-11 23:02:09.474675] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.391 [2024-10-11 23:02:09.483997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.391 [2024-10-11 23:02:09.484035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same error pair — subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — repeats for every subsequent attempt from 23:02:09.495 through 23:02:11.350, timestamps 00:39:06.391 through 00:39:08.201]
00:39:07.167 11421.67 IOPS, 89.23 MiB/s [2024-10-11T21:02:10.435Z]
00:39:08.201 11403.75 IOPS, 89.09 MiB/s [2024-10-11T21:02:11.469Z]
00:39:08.201 [2024-10-11 23:02:11.361605] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.201 [2024-10-11 23:02:11.361630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.201 [2024-10-11 23:02:11.372616] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.201 [2024-10-11 23:02:11.372643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.201 [2024-10-11 23:02:11.387188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.201 [2024-10-11 23:02:11.387215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.201 [2024-10-11 23:02:11.397144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.201 [2024-10-11 23:02:11.397168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.201 [2024-10-11 23:02:11.409168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.201 [2024-10-11 23:02:11.409206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.201 [2024-10-11 23:02:11.423252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.202 [2024-10-11 23:02:11.423278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.202 [2024-10-11 23:02:11.432927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.202 [2024-10-11 23:02:11.432952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.202 [2024-10-11 23:02:11.445074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.202 [2024-10-11 23:02:11.445098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.202 [2024-10-11 23:02:11.456252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.202 [2024-10-11 23:02:11.456279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.202 [2024-10-11 23:02:11.467689] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:08.202 [2024-10-11 23:02:11.467716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.460 [2024-10-11 23:02:11.478434] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.460 [2024-10-11 23:02:11.478459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.460 [2024-10-11 23:02:11.489480] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.460 [2024-10-11 23:02:11.489506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.460 [2024-10-11 23:02:11.500968] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.460 [2024-10-11 23:02:11.501007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.460 [2024-10-11 23:02:11.514010] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.460 [2024-10-11 23:02:11.514037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.460 [2024-10-11 23:02:11.523584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.460 [2024-10-11 23:02:11.523610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.460 [2024-10-11 23:02:11.538612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.460 [2024-10-11 23:02:11.538640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.460 [2024-10-11 23:02:11.549747] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.460 [2024-10-11 23:02:11.549774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.460 [2024-10-11 23:02:11.561282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.460 
[2024-10-11 23:02:11.561307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.460 [2024-10-11 23:02:11.573971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.460 [2024-10-11 23:02:11.573997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.460 [2024-10-11 23:02:11.583772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.460 [2024-10-11 23:02:11.583798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.460 [2024-10-11 23:02:11.598524] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.460 [2024-10-11 23:02:11.598556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.460 [2024-10-11 23:02:11.609644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.460 [2024-10-11 23:02:11.609670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.460 [2024-10-11 23:02:11.620727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.460 [2024-10-11 23:02:11.620754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.460 [2024-10-11 23:02:11.634006] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.460 [2024-10-11 23:02:11.634033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.460 [2024-10-11 23:02:11.643468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.460 [2024-10-11 23:02:11.643494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.460 [2024-10-11 23:02:11.655642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.460 [2024-10-11 23:02:11.655683] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.460 [2024-10-11 23:02:11.666439] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.460 [2024-10-11 23:02:11.666466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.460 [2024-10-11 23:02:11.677604] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.460 [2024-10-11 23:02:11.677631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.460 [2024-10-11 23:02:11.688364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.460 [2024-10-11 23:02:11.688391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.460 [2024-10-11 23:02:11.700154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.460 [2024-10-11 23:02:11.700179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.460 [2024-10-11 23:02:11.711469] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.460 [2024-10-11 23:02:11.711497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.460 [2024-10-11 23:02:11.722611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.460 [2024-10-11 23:02:11.722639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.718 [2024-10-11 23:02:11.734123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.718 [2024-10-11 23:02:11.734149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.718 [2024-10-11 23:02:11.744645] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.718 [2024-10-11 23:02:11.744672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:08.719 [2024-10-11 23:02:11.757047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.719 [2024-10-11 23:02:11.757072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.719 [2024-10-11 23:02:11.770619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.719 [2024-10-11 23:02:11.770645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.719 [2024-10-11 23:02:11.780003] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.719 [2024-10-11 23:02:11.780029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.719 [2024-10-11 23:02:11.792418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.719 [2024-10-11 23:02:11.792458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.719 [2024-10-11 23:02:11.807316] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.719 [2024-10-11 23:02:11.807341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.719 [2024-10-11 23:02:11.817666] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.719 [2024-10-11 23:02:11.817693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.719 [2024-10-11 23:02:11.829582] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.719 [2024-10-11 23:02:11.829618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.719 [2024-10-11 23:02:11.843007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.719 [2024-10-11 23:02:11.843031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.719 [2024-10-11 23:02:11.852524] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.719 [2024-10-11 23:02:11.852571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.719 [2024-10-11 23:02:11.864083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.719 [2024-10-11 23:02:11.864110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.719 [2024-10-11 23:02:11.879429] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.719 [2024-10-11 23:02:11.879455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.719 [2024-10-11 23:02:11.888767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.719 [2024-10-11 23:02:11.888793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.719 [2024-10-11 23:02:11.900840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.719 [2024-10-11 23:02:11.900867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.719 [2024-10-11 23:02:11.915432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.719 [2024-10-11 23:02:11.915460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.719 [2024-10-11 23:02:11.925305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.719 [2024-10-11 23:02:11.925332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.719 [2024-10-11 23:02:11.937317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.719 [2024-10-11 23:02:11.937344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.719 [2024-10-11 23:02:11.948222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:08.719 [2024-10-11 23:02:11.948250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.719 [2024-10-11 23:02:11.964289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.719 [2024-10-11 23:02:11.964316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.719 [2024-10-11 23:02:11.978873] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.719 [2024-10-11 23:02:11.978900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.977 [2024-10-11 23:02:11.988585] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.977 [2024-10-11 23:02:11.988627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.977 [2024-10-11 23:02:12.000944] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.977 [2024-10-11 23:02:12.000971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.977 [2024-10-11 23:02:12.015904] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.977 [2024-10-11 23:02:12.015931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.977 [2024-10-11 23:02:12.030578] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.977 [2024-10-11 23:02:12.030606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.977 [2024-10-11 23:02:12.040065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.977 [2024-10-11 23:02:12.040089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.977 [2024-10-11 23:02:12.055181] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.977 
[2024-10-11 23:02:12.055205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.977 [2024-10-11 23:02:12.066123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.977 [2024-10-11 23:02:12.066159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.977 [2024-10-11 23:02:12.077374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.977 [2024-10-11 23:02:12.077413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.977 [2024-10-11 23:02:12.090650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.977 [2024-10-11 23:02:12.090677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.978 [2024-10-11 23:02:12.100205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.978 [2024-10-11 23:02:12.100231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.978 [2024-10-11 23:02:12.115623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.978 [2024-10-11 23:02:12.115648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.978 [2024-10-11 23:02:12.126036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.978 [2024-10-11 23:02:12.126063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.978 [2024-10-11 23:02:12.137377] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.978 [2024-10-11 23:02:12.137402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.978 [2024-10-11 23:02:12.148432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.978 [2024-10-11 23:02:12.148459] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.978 [2024-10-11 23:02:12.163794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.978 [2024-10-11 23:02:12.163820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.978 [2024-10-11 23:02:12.179911] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.978 [2024-10-11 23:02:12.179939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.978 [2024-10-11 23:02:12.189855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.978 [2024-10-11 23:02:12.189881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.978 [2024-10-11 23:02:12.201805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.978 [2024-10-11 23:02:12.201831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.978 [2024-10-11 23:02:12.212901] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.978 [2024-10-11 23:02:12.212928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.978 [2024-10-11 23:02:12.228414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.978 [2024-10-11 23:02:12.228442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.978 11409.00 IOPS, 89.13 MiB/s 00:39:08.978 Latency(us) 00:39:08.978 [2024-10-11T21:02:12.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:08.978 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:39:08.978 Nvme1n1 : 5.01 11416.16 89.19 0.00 0.00 11199.40 3009.80 18058.81 00:39:08.978 [2024-10-11T21:02:12.246Z] 
=================================================================================================================== 00:39:08.978
[2024-10-11T21:02:12.246Z] Total : 11416.16 89.19 0.00 0.00 11199.40 3009.80 18058.81
00:39:08.978 [2024-10-11 23:02:12.240698] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:08.978 [2024-10-11 23:02:12.240726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[identical "Requested NSID 1 already in use" / "Unable to add namespace" error pairs repeated, timestamps 23:02:12.245675 through 23:02:12.413694]
00:39:09.238 [2024-10-11 23:02:12.421681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:09.238
[2024-10-11 23:02:12.421703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:09.238 [2024-10-11 23:02:12.429669] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:09.238 [2024-10-11 23:02:12.429691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:09.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (438227) - No such process 00:39:09.238 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 438227 00:39:09.238 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:09.238 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:09.238 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:09.238 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:09.238 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:09.238 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:09.238 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:09.238 delay0 00:39:09.238 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:09.239 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:39:09.239 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 
00:39:09.239 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:09.239 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:09.239 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:39:09.239 [2024-10-11 23:02:12.499711] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:39:17.366 Initializing NVMe Controllers 00:39:17.366 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:17.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:39:17.366 Initialization complete. Launching workers. 00:39:17.366 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 221, failed: 26648 00:39:17.366 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 26722, failed to submit 147 00:39:17.366 success 26656, unsuccessful 66, failed 0 00:39:17.366 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:39:17.366 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:39:17.367 23:02:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:17.367 rmmod nvme_tcp 00:39:17.367 rmmod nvme_fabrics 00:39:17.367 rmmod nvme_keyring 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 437020 ']' 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 437020 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 437020 ']' 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 437020 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 437020 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 437020' 00:39:17.367 killing 
process with pid 437020 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 437020 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 437020 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:17.367 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:18.743 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:18.743 00:39:18.743 real 0m28.355s 00:39:18.743 user 0m40.507s 00:39:18.743 sys 0m9.897s 
00:39:18.744 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:18.744 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:18.744 ************************************ 00:39:18.744 END TEST nvmf_zcopy 00:39:18.744 ************************************ 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:19.002 ************************************ 00:39:19.002 START TEST nvmf_nmic 00:39:19.002 ************************************ 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:19.002 * Looking for test storage... 
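The `iptr` step traced in the teardown above relies on SPDK having installed its firewall rules with an identifying comment (`SPDK_NVMF`), so cleanup can remove exactly those rules via `iptables-save | grep -v SPDK_NVMF | iptables-restore`. A sketch of that filter idiom, applied to a sample saved ruleset instead of the live tables (the real commands need root):

```shell
#!/usr/bin/env bash
# Filter a saved iptables ruleset, dropping only the SPDK-tagged rules.
# On a real system this would be: iptables-save | grep -v SPDK_NVMF | iptables-restore
ruleset='-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:..."
-A INPUT -j DROP'
cleaned=$(printf '%s\n' "$ruleset" | grep -v SPDK_NVMF)
printf '%s\n' "$cleaned"
```

Tagging rules at install time (as the `ipts` helper does later in this trace with `-m comment --comment 'SPDK_NVMF:...'`) is what makes this selective teardown possible without tracking rule numbers.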
00:39:19.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- 
# case "$op" in 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:19.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:19.002 --rc genhtml_branch_coverage=1 00:39:19.002 --rc genhtml_function_coverage=1 00:39:19.002 --rc genhtml_legend=1 00:39:19.002 --rc geninfo_all_blocks=1 00:39:19.002 --rc geninfo_unexecuted_blocks=1 00:39:19.002 00:39:19.002 ' 00:39:19.002 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:19.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:19.003 --rc genhtml_branch_coverage=1 00:39:19.003 --rc genhtml_function_coverage=1 00:39:19.003 --rc genhtml_legend=1 00:39:19.003 --rc geninfo_all_blocks=1 00:39:19.003 --rc geninfo_unexecuted_blocks=1 00:39:19.003 00:39:19.003 ' 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:19.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:19.003 --rc genhtml_branch_coverage=1 00:39:19.003 --rc genhtml_function_coverage=1 00:39:19.003 --rc genhtml_legend=1 00:39:19.003 --rc geninfo_all_blocks=1 00:39:19.003 --rc geninfo_unexecuted_blocks=1 00:39:19.003 00:39:19.003 ' 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:19.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:19.003 --rc genhtml_branch_coverage=1 00:39:19.003 --rc genhtml_function_coverage=1 00:39:19.003 --rc genhtml_legend=1 00:39:19.003 --rc geninfo_all_blocks=1 00:39:19.003 --rc geninfo_unexecuted_blocks=1 00:39:19.003 00:39:19.003 ' 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:39:19.003 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@315 -- # pci_devs=() 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:21.540 23:02:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:21.540 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:21.540 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
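The device-discovery loop above maps each matched PCI function to its kernel net devices by globbing sysfs (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` at nvmf/common.sh@409). A sketch of that lookup, with the sysfs root made a parameter (an assumption, for testability) rather than hard-coded:

```shell
#!/usr/bin/env bash
# list_net_devs SYSFS_ROOT PCI_BDF: print the kernel netdev names exposed by a
# PCI network function. Each function's interfaces appear as entries under
# /sys/bus/pci/devices/<BDF>/net/, so a directory glob is all that is needed.
list_net_devs() {
    local sysfs_root=$1 pci=$2 dev
    for dev in "$sysfs_root/$pci/net/"*; do
        [ -e "$dev" ] || continue      # glob matched nothing: no netdev bound
        echo "${dev##*/}"              # e.g. cvl_0_0 in the trace above
    done
}
# On a real system: list_net_devs /sys/bus/pci/devices 0000:0a:00.0
```

An empty result means the function exists but has no bound network driver, which is why the trace separately checks the driver name (`ice`) against `unknown`/`unbound` before trusting the glob.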
00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:21.540 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:21.540 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:21.540 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:21.541 23:02:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:21.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:21.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms 00:39:21.541 00:39:21.541 --- 10.0.0.2 ping statistics --- 00:39:21.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:21.541 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:21.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:21.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:39:21.541 00:39:21.541 --- 10.0.0.1 ping statistics --- 00:39:21.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:21.541 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=441725 
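The `nvmf_tcp_init` sequence traced above (common.sh@271–291) builds the test topology by moving one NIC port into a private namespace as the target and leaving its peer in the root namespace as the initiator, then verifying both directions with ping. A dry-run sketch of that sequence, collected and printed rather than executed, since the real commands require root and the physical `cvl_0_*` interfaces:

```shell
#!/usr/bin/env bash
# Namespace topology from the trace: cvl_0_0 becomes the target at 10.0.0.2
# inside cvl_0_0_ns_spdk; cvl_0_1 stays in the root namespace at 10.0.0.1.
NS=cvl_0_0_ns_spdk
setup_cmds=(
    "ip netns add $NS"
    "ip link set cvl_0_0 netns $NS"
    "ip addr add 10.0.0.1/24 dev cvl_0_1"
    "ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0"
    "ip link set cvl_0_1 up"
    "ip netns exec $NS ip link set cvl_0_0 up"
    "ip netns exec $NS ip link set lo up"
    "ping -c 1 10.0.0.2"
    "ip netns exec $NS ping -c 1 10.0.0.1"
)
printf '%s\n' "${setup_cmds[@]}"
```

Running the target inside a namespace is also why the NVMF app is later launched with `ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt`, and why teardown flushes the addresses and deletes the namespace.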
00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 441725 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 441725 ']' 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:21.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:21.541 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:21.541 [2024-10-11 23:02:24.687631] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:21.541 [2024-10-11 23:02:24.688749] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:39:21.541 [2024-10-11 23:02:24.688804] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:21.541 [2024-10-11 23:02:24.756386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:21.800 [2024-10-11 23:02:24.807829] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:21.800 [2024-10-11 23:02:24.807900] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:21.800 [2024-10-11 23:02:24.807914] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:21.800 [2024-10-11 23:02:24.807925] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:21.801 [2024-10-11 23:02:24.807935] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:21.801 [2024-10-11 23:02:24.809586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:21.801 [2024-10-11 23:02:24.809649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:21.801 [2024-10-11 23:02:24.809714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:21.801 [2024-10-11 23:02:24.809716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:21.801 [2024-10-11 23:02:24.903573] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:21.801 [2024-10-11 23:02:24.903824] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:21.801 [2024-10-11 23:02:24.904055] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:21.801 [2024-10-11 23:02:24.904722] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:21.801 [2024-10-11 23:02:24.904961] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:21.801 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:21.801 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:39:21.801 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:21.801 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:21.801 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:21.801 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:21.801 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:21.801 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.801 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:21.801 [2024-10-11 23:02:24.954416] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:21.801 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.801 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:21.801 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:39:21.801 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:21.801 Malloc0 00:39:21.801 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.801 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:21.801 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:21.801 [2024-10-11 23:02:25.022721] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:39:21.801 test case1: single bdev can't be used in multiple subsystems 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:21.801 [2024-10-11 23:02:25.046364] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 
already claimed: type exclusive_write by module NVMe-oF Target 00:39:21.801 [2024-10-11 23:02:25.046403] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:39:21.801 [2024-10-11 23:02:25.046422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:21.801 request: 00:39:21.801 { 00:39:21.801 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:39:21.801 "namespace": { 00:39:21.801 "bdev_name": "Malloc0", 00:39:21.801 "no_auto_visible": false 00:39:21.801 }, 00:39:21.801 "method": "nvmf_subsystem_add_ns", 00:39:21.801 "req_id": 1 00:39:21.801 } 00:39:21.801 Got JSON-RPC error response 00:39:21.801 response: 00:39:21.801 { 00:39:21.801 "code": -32602, 00:39:21.801 "message": "Invalid parameters" 00:39:21.801 } 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:39:21.801 Adding namespace failed - expected result. 
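As an aside for readers following the negative test above (nmic.sh test case 1): the JSON-RPC error body that the log captures can be checked mechanically. Below is a minimal illustrative sketch that reassembles the error object exactly as logged; the Python itself is hypothetical and not part of the SPDK test scripts, which do this check in shell via `nmic_status`.

```python
import json

# Error body reassembled verbatim from the JSON-RPC response in the log
# above; code -32602 is the standard JSON-RPC "Invalid params" error.
error_response = json.loads("""
{
    "code": -32602,
    "message": "Invalid parameters"
}
""")

# nmic.sh treats this failure as the expected result: Malloc0 is already
# claimed (exclusive_write) by cnode1, so adding it to cnode2 must fail.
expected_failure = error_response["code"] == -32602
print("Adding namespace failed - expected result."
      if expected_failure else "unexpected success")
```

The shell script reaches the same conclusion by capturing the non-zero `rpc_cmd` exit status into `nmic_status` and asserting it is not 0.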
00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:39:21.801 test case2: host connect to nvmf target in multiple paths 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:21.801 [2024-10-11 23:02:25.054454] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.801 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:22.062 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:39:22.320 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:39:22.320 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:39:22.320 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:39:22.320 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:39:22.320 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:39:24.863 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:39:24.863 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:39:24.863 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:39:24.863 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:39:24.863 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:39:24.863 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:39:24.863 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:24.863 [global] 00:39:24.863 thread=1 00:39:24.863 invalidate=1 00:39:24.863 rw=write 00:39:24.863 time_based=1 00:39:24.863 runtime=1 00:39:24.863 ioengine=libaio 00:39:24.863 direct=1 00:39:24.863 bs=4096 00:39:24.863 iodepth=1 00:39:24.863 norandommap=0 00:39:24.863 numjobs=1 00:39:24.863 00:39:24.863 verify_dump=1 00:39:24.863 verify_backlog=512 00:39:24.863 verify_state_save=0 00:39:24.863 do_verify=1 00:39:24.863 verify=crc32c-intel 00:39:24.863 [job0] 00:39:24.863 filename=/dev/nvme0n1 00:39:24.863 Could not set queue depth (nvme0n1) 00:39:24.863 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:24.863 fio-3.35 00:39:24.863 Starting 1 thread 00:39:25.799 00:39:25.799 job0: (groupid=0, jobs=1): err= 0: pid=442187: Fri Oct 11 
23:02:28 2024 00:39:25.799 read: IOPS=2512, BW=9.81MiB/s (10.3MB/s)(9.82MiB/1001msec) 00:39:25.799 slat (nsec): min=4418, max=32947, avg=5310.78, stdev=1983.83 00:39:25.799 clat (usec): min=196, max=382, avg=219.34, stdev=10.38 00:39:25.799 lat (usec): min=202, max=387, avg=224.66, stdev=10.52 00:39:25.799 clat percentiles (usec): 00:39:25.799 | 1.00th=[ 202], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 212], 00:39:25.799 | 30.00th=[ 215], 40.00th=[ 217], 50.00th=[ 219], 60.00th=[ 221], 00:39:25.799 | 70.00th=[ 223], 80.00th=[ 227], 90.00th=[ 231], 95.00th=[ 237], 00:39:25.799 | 99.00th=[ 249], 99.50th=[ 253], 99.90th=[ 318], 99.95th=[ 383], 00:39:25.799 | 99.99th=[ 383] 00:39:25.799 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:39:25.799 slat (nsec): min=5694, max=30254, avg=6845.29, stdev=2122.01 00:39:25.799 clat (usec): min=138, max=394, avg=159.53, stdev=18.32 00:39:25.799 lat (usec): min=144, max=400, avg=166.38, stdev=18.51 00:39:25.799 clat percentiles (usec): 00:39:25.799 | 1.00th=[ 143], 5.00th=[ 145], 10.00th=[ 145], 20.00th=[ 147], 00:39:25.799 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 155], 00:39:25.799 | 70.00th=[ 161], 80.00th=[ 167], 90.00th=[ 188], 95.00th=[ 194], 00:39:25.799 | 99.00th=[ 212], 99.50th=[ 253], 99.90th=[ 277], 99.95th=[ 338], 00:39:25.799 | 99.99th=[ 396] 00:39:25.799 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:39:25.799 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:39:25.799 lat (usec) : 250=99.25%, 500=0.75% 00:39:25.799 cpu : usr=2.20%, sys=2.60%, ctx=5076, majf=0, minf=1 00:39:25.799 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:25.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:25.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:25.799 issued rwts: total=2515,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:25.799 
latency : target=0, window=0, percentile=100.00%, depth=1 00:39:25.799 00:39:25.799 Run status group 0 (all jobs): 00:39:25.799 READ: bw=9.81MiB/s (10.3MB/s), 9.81MiB/s-9.81MiB/s (10.3MB/s-10.3MB/s), io=9.82MiB (10.3MB), run=1001-1001msec 00:39:25.799 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:39:25.799 00:39:25.799 Disk stats (read/write): 00:39:25.799 nvme0n1: ios=2121/2560, merge=0/0, ticks=542/400, in_queue=942, util=95.59% 00:39:25.799 23:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:26.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:39:26.058 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:26.058 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:39:26.058 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:39:26.059 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:26.059 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:39:26.059 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:26.059 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:39:26.059 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:39:26.059 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:39:26.059 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 
-- # nvmfcleanup 00:39:26.059 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:39:26.059 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:26.059 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:39:26.059 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:26.059 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:26.059 rmmod nvme_tcp 00:39:26.059 rmmod nvme_fabrics 00:39:26.059 rmmod nvme_keyring 00:39:26.059 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:26.059 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:39:26.059 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:39:26.059 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 441725 ']' 00:39:26.059 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 441725 00:39:26.059 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 441725 ']' 00:39:26.059 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 441725 00:39:26.059 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:39:26.059 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:26.059 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 441725 00:39:26.059 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:26.059 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:26.059 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 441725' 00:39:26.059 killing process with pid 441725 00:39:26.059 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 441725 00:39:26.059 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 441725 00:39:26.317 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:26.317 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:26.317 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:26.317 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:39:26.317 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:39:26.317 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:26.317 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:39:26.317 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:26.318 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:26.318 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:26.318 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
00:39:26.318 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:28.228 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:28.228 00:39:28.228 real 0m9.436s 00:39:28.228 user 0m17.163s 00:39:28.228 sys 0m3.686s 00:39:28.228 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:28.228 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:28.228 ************************************ 00:39:28.228 END TEST nvmf_nmic 00:39:28.228 ************************************ 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:28.488 ************************************ 00:39:28.488 START TEST nvmf_fio_target 00:39:28.488 ************************************ 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:28.488 * Looking for test storage... 
00:39:28.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:28.488 
23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:28.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:28.488 --rc genhtml_branch_coverage=1 00:39:28.488 --rc genhtml_function_coverage=1 00:39:28.488 --rc genhtml_legend=1 00:39:28.488 --rc geninfo_all_blocks=1 00:39:28.488 --rc geninfo_unexecuted_blocks=1 00:39:28.488 00:39:28.488 ' 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:28.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:28.488 --rc genhtml_branch_coverage=1 00:39:28.488 --rc genhtml_function_coverage=1 00:39:28.488 --rc genhtml_legend=1 00:39:28.488 --rc geninfo_all_blocks=1 00:39:28.488 --rc geninfo_unexecuted_blocks=1 00:39:28.488 00:39:28.488 ' 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:28.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:28.488 --rc genhtml_branch_coverage=1 00:39:28.488 --rc genhtml_function_coverage=1 00:39:28.488 --rc genhtml_legend=1 00:39:28.488 --rc geninfo_all_blocks=1 00:39:28.488 --rc geninfo_unexecuted_blocks=1 00:39:28.488 00:39:28.488 ' 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:28.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:28.488 --rc genhtml_branch_coverage=1 00:39:28.488 --rc genhtml_function_coverage=1 00:39:28.488 --rc genhtml_legend=1 00:39:28.488 --rc geninfo_all_blocks=1 
00:39:28.488 --rc geninfo_unexecuted_blocks=1 00:39:28.488 00:39:28.488 ' 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:28.488 
23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:28.488 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:28.489 23:02:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:28.489 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:28.489 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:39:28.489 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:28.489 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:39:28.489 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:28.489 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:28.489 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:28.489 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:28.489 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:28.489 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:28.489 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:28.489 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:28.489 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:28.489 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:28.489 
23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:28.489 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:28.489 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:28.489 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:39:28.489 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:28.489 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:28.489 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:28.489 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:28.489 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:28.489 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:28.489 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:28.489 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:28.489 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:28.489 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:28.489 23:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:39:28.489 23:02:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:39:31.024 23:02:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:31.024 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:31.024 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:31.024 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:31.025 
23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:31.025 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:31.025 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:31.025 23:02:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:31.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:31.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:39:31.025 00:39:31.025 --- 10.0.0.2 ping statistics --- 00:39:31.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:31.025 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:31.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:31.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:39:31.025 00:39:31.025 --- 10.0.0.1 ping statistics --- 00:39:31.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:31.025 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:31.025 23:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:31.025 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:39:31.025 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:31.025 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:31.025 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:31.025 23:02:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=444306 00:39:31.025 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:31.025 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 444306 00:39:31.025 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 444306 ']' 00:39:31.025 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:31.025 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:31.025 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:31.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:31.025 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:31.025 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:31.025 [2024-10-11 23:02:34.061506] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:31.025 [2024-10-11 23:02:34.062651] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:39:31.025 [2024-10-11 23:02:34.062708] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:31.025 [2024-10-11 23:02:34.129208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:31.025 [2024-10-11 23:02:34.179299] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:31.025 [2024-10-11 23:02:34.179368] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:31.025 [2024-10-11 23:02:34.179382] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:31.025 [2024-10-11 23:02:34.179393] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:31.025 [2024-10-11 23:02:34.179403] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:31.025 [2024-10-11 23:02:34.181009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:31.025 [2024-10-11 23:02:34.181075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:31.025 [2024-10-11 23:02:34.181142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:31.025 [2024-10-11 23:02:34.181144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:31.025 [2024-10-11 23:02:34.272841] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:31.025 [2024-10-11 23:02:34.273026] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:31.025 [2024-10-11 23:02:34.273334] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:31.025 [2024-10-11 23:02:34.273996] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:31.026 [2024-10-11 23:02:34.274203] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:31.284 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:31.284 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:39:31.285 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:31.285 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:31.285 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:31.285 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:31.285 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:31.543 [2024-10-11 23:02:34.577877] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:31.543 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:31.801 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:39:31.802 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:39:32.060 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:39:32.060 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:32.317 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:39:32.317 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:32.576 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:39:32.576 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:39:32.847 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:33.417 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:39:33.417 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:33.417 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:39:33.417 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:33.985 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:39:33.985 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:39:33.985 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:34.244 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:34.244 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:34.810 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:34.810 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:34.810 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:35.068 [2024-10-11 23:02:38.306034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:35.068 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:39:35.638 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:39:35.638 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:35.897 23:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:39:35.897 23:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:39:35.897 23:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:39:35.897 23:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:39:35.897 23:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:39:35.897 23:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:39:38.430 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:39:38.430 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:39:38.430 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:39:38.430 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:39:38.430 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:39:38.430 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1208 -- # return 0 00:39:38.430 23:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:38.430 [global] 00:39:38.430 thread=1 00:39:38.430 invalidate=1 00:39:38.430 rw=write 00:39:38.430 time_based=1 00:39:38.430 runtime=1 00:39:38.430 ioengine=libaio 00:39:38.430 direct=1 00:39:38.430 bs=4096 00:39:38.430 iodepth=1 00:39:38.430 norandommap=0 00:39:38.430 numjobs=1 00:39:38.430 00:39:38.430 verify_dump=1 00:39:38.430 verify_backlog=512 00:39:38.430 verify_state_save=0 00:39:38.430 do_verify=1 00:39:38.430 verify=crc32c-intel 00:39:38.430 [job0] 00:39:38.430 filename=/dev/nvme0n1 00:39:38.430 [job1] 00:39:38.430 filename=/dev/nvme0n2 00:39:38.430 [job2] 00:39:38.430 filename=/dev/nvme0n3 00:39:38.430 [job3] 00:39:38.430 filename=/dev/nvme0n4 00:39:38.430 Could not set queue depth (nvme0n1) 00:39:38.430 Could not set queue depth (nvme0n2) 00:39:38.430 Could not set queue depth (nvme0n3) 00:39:38.430 Could not set queue depth (nvme0n4) 00:39:38.430 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:38.430 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:38.430 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:38.430 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:38.430 fio-3.35 00:39:38.430 Starting 4 threads 00:39:39.366 00:39:39.366 job0: (groupid=0, jobs=1): err= 0: pid=445254: Fri Oct 11 23:02:42 2024 00:39:39.366 read: IOPS=242, BW=968KiB/s (991kB/s)(972KiB/1004msec) 00:39:39.366 slat (nsec): min=4680, max=36012, avg=11100.36, stdev=4864.13 00:39:39.366 clat (usec): min=280, max=41165, avg=3523.11, stdev=10930.00 00:39:39.366 lat (usec): min=285, 
max=41179, avg=3534.21, stdev=10931.08 00:39:39.366 clat percentiles (usec): 00:39:39.366 | 1.00th=[ 281], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 302], 00:39:39.366 | 30.00th=[ 314], 40.00th=[ 330], 50.00th=[ 351], 60.00th=[ 383], 00:39:39.366 | 70.00th=[ 392], 80.00th=[ 396], 90.00th=[ 412], 95.00th=[41157], 00:39:39.366 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:39.366 | 99.99th=[41157] 00:39:39.366 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:39:39.366 slat (nsec): min=5891, max=31273, avg=9710.36, stdev=3638.67 00:39:39.366 clat (usec): min=163, max=526, avg=268.23, stdev=62.84 00:39:39.366 lat (usec): min=169, max=536, avg=277.94, stdev=64.14 00:39:39.366 clat percentiles (usec): 00:39:39.366 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 194], 00:39:39.366 | 30.00th=[ 227], 40.00th=[ 245], 50.00th=[ 285], 60.00th=[ 310], 00:39:39.366 | 70.00th=[ 318], 80.00th=[ 330], 90.00th=[ 338], 95.00th=[ 347], 00:39:39.366 | 99.00th=[ 359], 99.50th=[ 379], 99.90th=[ 529], 99.95th=[ 529], 00:39:39.366 | 99.99th=[ 529] 00:39:39.366 bw ( KiB/s): min= 4096, max= 4096, per=18.89%, avg=4096.00, stdev= 0.00, samples=1 00:39:39.366 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:39.366 lat (usec) : 250=29.80%, 500=67.55%, 750=0.13% 00:39:39.366 lat (msec) : 50=2.52% 00:39:39.366 cpu : usr=0.50%, sys=0.60%, ctx=755, majf=0, minf=2 00:39:39.366 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:39.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.366 issued rwts: total=243,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:39.366 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:39.366 job1: (groupid=0, jobs=1): err= 0: pid=445255: Fri Oct 11 23:02:42 2024 00:39:39.366 read: IOPS=20, BW=81.8KiB/s (83.8kB/s)(84.0KiB/1027msec) 
00:39:39.366 slat (nsec): min=8470, max=42985, avg=15775.57, stdev=7074.10 00:39:39.366 clat (usec): min=40747, max=41058, avg=40966.87, stdev=70.03 00:39:39.366 lat (usec): min=40755, max=41072, avg=40982.64, stdev=68.63 00:39:39.366 clat percentiles (usec): 00:39:39.366 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:39:39.366 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:39.366 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:39.366 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:39.366 | 99.99th=[41157] 00:39:39.366 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:39:39.366 slat (nsec): min=8292, max=32701, avg=11506.02, stdev=4267.89 00:39:39.366 clat (usec): min=181, max=496, avg=308.64, stdev=52.47 00:39:39.366 lat (usec): min=194, max=513, avg=320.14, stdev=51.84 00:39:39.366 clat percentiles (usec): 00:39:39.366 | 1.00th=[ 202], 5.00th=[ 223], 10.00th=[ 233], 20.00th=[ 253], 00:39:39.366 | 30.00th=[ 293], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 322], 00:39:39.366 | 70.00th=[ 330], 80.00th=[ 343], 90.00th=[ 375], 95.00th=[ 400], 00:39:39.366 | 99.00th=[ 429], 99.50th=[ 465], 99.90th=[ 498], 99.95th=[ 498], 00:39:39.366 | 99.99th=[ 498] 00:39:39.366 bw ( KiB/s): min= 4096, max= 4096, per=18.89%, avg=4096.00, stdev= 0.00, samples=1 00:39:39.366 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:39.366 lat (usec) : 250=16.89%, 500=79.17% 00:39:39.366 lat (msec) : 50=3.94% 00:39:39.366 cpu : usr=0.10%, sys=0.97%, ctx=534, majf=0, minf=1 00:39:39.366 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:39.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.366 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:39.366 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:39:39.366 job2: (groupid=0, jobs=1): err= 0: pid=445257: Fri Oct 11 23:02:42 2024 00:39:39.366 read: IOPS=2068, BW=8276KiB/s (8474kB/s)(8284KiB/1001msec) 00:39:39.366 slat (nsec): min=4508, max=29886, avg=6316.94, stdev=2928.91 00:39:39.366 clat (usec): min=206, max=11164, avg=249.22, stdev=241.66 00:39:39.366 lat (usec): min=213, max=11171, avg=255.54, stdev=241.79 00:39:39.366 clat percentiles (usec): 00:39:39.366 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 221], 20.00th=[ 227], 00:39:39.366 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 243], 00:39:39.366 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 269], 95.00th=[ 293], 00:39:39.366 | 99.00th=[ 383], 99.50th=[ 400], 99.90th=[ 449], 99.95th=[ 474], 00:39:39.366 | 99.99th=[11207] 00:39:39.366 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:39:39.366 slat (nsec): min=5823, max=53072, avg=8694.18, stdev=3845.38 00:39:39.366 clat (usec): min=149, max=508, avg=171.74, stdev=17.03 00:39:39.366 lat (usec): min=155, max=523, avg=180.43, stdev=17.74 00:39:39.366 clat percentiles (usec): 00:39:39.366 | 1.00th=[ 153], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:39:39.366 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:39:39.366 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 196], 00:39:39.366 | 99.00th=[ 237], 99.50th=[ 249], 99.90th=[ 326], 99.95th=[ 326], 00:39:39.366 | 99.99th=[ 510] 00:39:39.366 bw ( KiB/s): min= 9400, max= 9400, per=43.35%, avg=9400.00, stdev= 0.00, samples=1 00:39:39.366 iops : min= 2350, max= 2350, avg=2350.00, stdev= 0.00, samples=1 00:39:39.366 lat (usec) : 250=88.58%, 500=11.38%, 750=0.02% 00:39:39.366 lat (msec) : 20=0.02% 00:39:39.366 cpu : usr=2.00%, sys=3.80%, ctx=4633, majf=0, minf=1 00:39:39.366 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:39.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.366 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.366 issued rwts: total=2071,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:39.366 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:39.366 job3: (groupid=0, jobs=1): err= 0: pid=445261: Fri Oct 11 23:02:42 2024 00:39:39.366 read: IOPS=1481, BW=5925KiB/s (6067kB/s)(6156KiB/1039msec) 00:39:39.366 slat (nsec): min=4338, max=33963, avg=7827.24, stdev=3844.32 00:39:39.366 clat (usec): min=223, max=41085, avg=365.22, stdev=1797.02 00:39:39.366 lat (usec): min=231, max=41092, avg=373.05, stdev=1797.56 00:39:39.366 clat percentiles (usec): 00:39:39.366 | 1.00th=[ 237], 5.00th=[ 243], 10.00th=[ 245], 20.00th=[ 249], 00:39:39.366 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 269], 60.00th=[ 273], 00:39:39.366 | 70.00th=[ 293], 80.00th=[ 318], 90.00th=[ 375], 95.00th=[ 383], 00:39:39.366 | 99.00th=[ 412], 99.50th=[ 441], 99.90th=[41157], 99.95th=[41157], 00:39:39.366 | 99.99th=[41157] 00:39:39.366 write: IOPS=1971, BW=7885KiB/s (8074kB/s)(8192KiB/1039msec); 0 zone resets 00:39:39.366 slat (nsec): min=5475, max=38482, avg=8567.43, stdev=3850.31 00:39:39.366 clat (usec): min=155, max=473, avg=213.88, stdev=71.83 00:39:39.366 lat (usec): min=162, max=487, avg=222.45, stdev=73.57 00:39:39.366 clat percentiles (usec): 00:39:39.366 | 1.00th=[ 163], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 172], 00:39:39.366 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:39:39.366 | 70.00th=[ 194], 80.00th=[ 247], 90.00th=[ 367], 95.00th=[ 383], 00:39:39.366 | 99.00th=[ 424], 99.50th=[ 445], 99.90th=[ 461], 99.95th=[ 474], 00:39:39.366 | 99.99th=[ 474] 00:39:39.366 bw ( KiB/s): min= 8192, max= 8192, per=37.78%, avg=8192.00, stdev= 0.00, samples=2 00:39:39.366 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:39:39.366 lat (usec) : 250=54.73%, 500=45.16% 00:39:39.366 lat (msec) : 2=0.03%, 50=0.08% 00:39:39.366 cpu : usr=1.54%, sys=3.18%, ctx=3587, majf=0, minf=1 00:39:39.366 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:39.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.366 issued rwts: total=1539,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:39.366 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:39.366 00:39:39.366 Run status group 0 (all jobs): 00:39:39.366 READ: bw=14.6MiB/s (15.3MB/s), 81.8KiB/s-8276KiB/s (83.8kB/s-8474kB/s), io=15.1MiB (15.9MB), run=1001-1039msec 00:39:39.367 WRITE: bw=21.2MiB/s (22.2MB/s), 1994KiB/s-9.99MiB/s (2042kB/s-10.5MB/s), io=22.0MiB (23.1MB), run=1001-1039msec 00:39:39.367 00:39:39.367 Disk stats (read/write): 00:39:39.367 nvme0n1: ios=289/512, merge=0/0, ticks=802/138, in_queue=940, util=95.09% 00:39:39.367 nvme0n2: ios=66/512, merge=0/0, ticks=1053/155, in_queue=1208, util=98.37% 00:39:39.367 nvme0n3: ios=1893/2048, merge=0/0, ticks=840/339, in_queue=1179, util=98.22% 00:39:39.367 nvme0n4: ios=1536/1697, merge=0/0, ticks=426/375, in_queue=801, util=89.58% 00:39:39.367 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:39:39.367 [global] 00:39:39.367 thread=1 00:39:39.367 invalidate=1 00:39:39.367 rw=randwrite 00:39:39.367 time_based=1 00:39:39.367 runtime=1 00:39:39.367 ioengine=libaio 00:39:39.367 direct=1 00:39:39.367 bs=4096 00:39:39.367 iodepth=1 00:39:39.367 norandommap=0 00:39:39.367 numjobs=1 00:39:39.367 00:39:39.367 verify_dump=1 00:39:39.367 verify_backlog=512 00:39:39.367 verify_state_save=0 00:39:39.367 do_verify=1 00:39:39.367 verify=crc32c-intel 00:39:39.367 [job0] 00:39:39.367 filename=/dev/nvme0n1 00:39:39.367 [job1] 00:39:39.367 filename=/dev/nvme0n2 00:39:39.367 [job2] 00:39:39.367 filename=/dev/nvme0n3 00:39:39.367 [job3] 00:39:39.367 filename=/dev/nvme0n4 
00:39:39.625 Could not set queue depth (nvme0n1) 00:39:39.625 Could not set queue depth (nvme0n2) 00:39:39.625 Could not set queue depth (nvme0n3) 00:39:39.625 Could not set queue depth (nvme0n4) 00:39:39.625 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:39.625 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:39.625 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:39.625 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:39.625 fio-3.35 00:39:39.625 Starting 4 threads 00:39:41.002 00:39:41.003 job0: (groupid=0, jobs=1): err= 0: pid=445595: Fri Oct 11 23:02:44 2024 00:39:41.003 read: IOPS=132, BW=531KiB/s (544kB/s)(540KiB/1017msec) 00:39:41.003 slat (nsec): min=6177, max=63859, avg=16669.46, stdev=5438.08 00:39:41.003 clat (usec): min=221, max=41206, avg=6603.53, stdev=14830.52 00:39:41.003 lat (usec): min=246, max=41216, avg=6620.20, stdev=14828.65 00:39:41.003 clat percentiles (usec): 00:39:41.003 | 1.00th=[ 235], 5.00th=[ 249], 10.00th=[ 251], 20.00th=[ 255], 00:39:41.003 | 30.00th=[ 255], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 269], 00:39:41.003 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[41157], 95.00th=[41157], 00:39:41.003 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:41.003 | 99.99th=[41157] 00:39:41.003 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:39:41.003 slat (nsec): min=7253, max=51408, avg=18125.70, stdev=7739.84 00:39:41.003 clat (usec): min=161, max=306, avg=214.94, stdev=17.84 00:39:41.003 lat (usec): min=169, max=331, avg=233.07, stdev=20.83 00:39:41.003 clat percentiles (usec): 00:39:41.003 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 194], 20.00th=[ 200], 00:39:41.003 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 
221], 00:39:41.003 | 70.00th=[ 225], 80.00th=[ 231], 90.00th=[ 237], 95.00th=[ 243], 00:39:41.003 | 99.00th=[ 258], 99.50th=[ 265], 99.90th=[ 306], 99.95th=[ 306], 00:39:41.003 | 99.99th=[ 306] 00:39:41.003 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:39:41.003 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:41.003 lat (usec) : 250=79.29%, 500=17.47% 00:39:41.003 lat (msec) : 50=3.25% 00:39:41.003 cpu : usr=0.59%, sys=1.67%, ctx=647, majf=0, minf=2 00:39:41.003 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:41.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.003 issued rwts: total=135,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:41.003 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:41.003 job1: (groupid=0, jobs=1): err= 0: pid=445596: Fri Oct 11 23:02:44 2024 00:39:41.003 read: IOPS=288, BW=1156KiB/s (1183kB/s)(1188KiB/1028msec) 00:39:41.003 slat (nsec): min=4061, max=32392, avg=10434.08, stdev=5768.09 00:39:41.003 clat (usec): min=210, max=41091, avg=2993.25, stdev=10223.55 00:39:41.003 lat (usec): min=215, max=41103, avg=3003.68, stdev=10224.96 00:39:41.003 clat percentiles (usec): 00:39:41.003 | 1.00th=[ 212], 5.00th=[ 217], 10.00th=[ 219], 20.00th=[ 223], 00:39:41.003 | 30.00th=[ 225], 40.00th=[ 227], 50.00th=[ 229], 60.00th=[ 233], 00:39:41.003 | 70.00th=[ 239], 80.00th=[ 273], 90.00th=[ 469], 95.00th=[41157], 00:39:41.003 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:41.003 | 99.99th=[41157] 00:39:41.003 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:39:41.003 slat (nsec): min=5707, max=38441, avg=12816.04, stdev=5734.76 00:39:41.003 clat (usec): min=161, max=422, avg=245.43, stdev=23.20 00:39:41.003 lat (usec): min=181, max=439, avg=258.25, stdev=23.06 00:39:41.003 
clat percentiles (usec): 00:39:41.003 | 1.00th=[ 202], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 233], 00:39:41.003 | 30.00th=[ 235], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:39:41.003 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 289], 00:39:41.003 | 99.00th=[ 343], 99.50th=[ 367], 99.90th=[ 424], 99.95th=[ 424], 00:39:41.003 | 99.99th=[ 424] 00:39:41.003 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:39:41.003 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:41.003 lat (usec) : 250=70.58%, 500=26.58%, 750=0.25%, 1000=0.12% 00:39:41.003 lat (msec) : 50=2.47% 00:39:41.003 cpu : usr=0.29%, sys=1.07%, ctx=809, majf=0, minf=2 00:39:41.003 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:41.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.003 issued rwts: total=297,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:41.003 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:41.003 job2: (groupid=0, jobs=1): err= 0: pid=445599: Fri Oct 11 23:02:44 2024 00:39:41.003 read: IOPS=22, BW=88.5KiB/s (90.7kB/s)(92.0KiB/1039msec) 00:39:41.003 slat (nsec): min=6733, max=35209, avg=16934.09, stdev=8657.25 00:39:41.003 clat (usec): min=355, max=41016, avg=39196.50, stdev=8467.35 00:39:41.003 lat (usec): min=362, max=41034, avg=39213.44, stdev=8469.56 00:39:41.003 clat percentiles (usec): 00:39:41.003 | 1.00th=[ 355], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:39:41.003 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:41.003 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:41.003 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:41.003 | 99.99th=[41157] 00:39:41.003 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:39:41.003 slat 
(nsec): min=7723, max=66059, avg=18861.07, stdev=8102.45 00:39:41.003 clat (usec): min=158, max=531, avg=242.39, stdev=42.63 00:39:41.003 lat (usec): min=166, max=552, avg=261.25, stdev=42.11 00:39:41.003 clat percentiles (usec): 00:39:41.003 | 1.00th=[ 184], 5.00th=[ 198], 10.00th=[ 206], 20.00th=[ 212], 00:39:41.003 | 30.00th=[ 219], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 243], 00:39:41.003 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 330], 00:39:41.003 | 99.00th=[ 404], 99.50th=[ 441], 99.90th=[ 529], 99.95th=[ 529], 00:39:41.003 | 99.99th=[ 529] 00:39:41.003 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:39:41.003 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:41.003 lat (usec) : 250=65.42%, 500=30.28%, 750=0.19% 00:39:41.003 lat (msec) : 50=4.11% 00:39:41.003 cpu : usr=0.29%, sys=1.54%, ctx=535, majf=0, minf=1 00:39:41.003 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:41.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.003 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:41.003 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:41.003 job3: (groupid=0, jobs=1): err= 0: pid=445600: Fri Oct 11 23:02:44 2024 00:39:41.003 read: IOPS=356, BW=1424KiB/s (1459kB/s)(1480KiB/1039msec) 00:39:41.003 slat (nsec): min=4747, max=33956, avg=10546.74, stdev=3913.12 00:39:41.003 clat (usec): min=209, max=41053, avg=2479.64, stdev=8963.42 00:39:41.003 lat (usec): min=215, max=41066, avg=2490.19, stdev=8964.61 00:39:41.003 clat percentiles (usec): 00:39:41.003 | 1.00th=[ 215], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 241], 00:39:41.003 | 30.00th=[ 251], 40.00th=[ 388], 50.00th=[ 449], 60.00th=[ 474], 00:39:41.003 | 70.00th=[ 502], 80.00th=[ 529], 90.00th=[ 586], 95.00th=[40633], 00:39:41.003 | 99.00th=[41157], 
99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:41.003 | 99.99th=[41157] 00:39:41.003 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:39:41.003 slat (nsec): min=6658, max=35772, avg=13623.55, stdev=5320.94 00:39:41.003 clat (usec): min=176, max=275, avg=208.95, stdev=11.85 00:39:41.003 lat (usec): min=192, max=310, avg=222.57, stdev=13.29 00:39:41.003 clat percentiles (usec): 00:39:41.003 | 1.00th=[ 188], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 200], 00:39:41.003 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 208], 60.00th=[ 210], 00:39:41.003 | 70.00th=[ 215], 80.00th=[ 219], 90.00th=[ 225], 95.00th=[ 229], 00:39:41.003 | 99.00th=[ 245], 99.50th=[ 251], 99.90th=[ 277], 99.95th=[ 277], 00:39:41.003 | 99.99th=[ 277] 00:39:41.003 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:39:41.003 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:41.003 lat (usec) : 250=69.39%, 500=17.91%, 750=10.43% 00:39:41.003 lat (msec) : 4=0.11%, 50=2.15% 00:39:41.003 cpu : usr=0.87%, sys=0.77%, ctx=882, majf=0, minf=1 00:39:41.003 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:41.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.003 issued rwts: total=370,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:41.003 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:41.003 00:39:41.003 Run status group 0 (all jobs): 00:39:41.003 READ: bw=3176KiB/s (3252kB/s), 88.5KiB/s-1424KiB/s (90.7kB/s-1459kB/s), io=3300KiB (3379kB), run=1017-1039msec 00:39:41.003 WRITE: bw=7885KiB/s (8074kB/s), 1971KiB/s-2014KiB/s (2018kB/s-2062kB/s), io=8192KiB (8389kB), run=1017-1039msec 00:39:41.003 00:39:41.003 Disk stats (read/write): 00:39:41.003 nvme0n1: ios=180/512, merge=0/0, ticks=719/104, in_queue=823, util=87.07% 00:39:41.003 nvme0n2: ios=312/512, merge=0/0, 
ticks=695/120, in_queue=815, util=86.50% 00:39:41.003 nvme0n3: ios=18/512, merge=0/0, ticks=697/113, in_queue=810, util=88.81% 00:39:41.003 nvme0n4: ios=365/512, merge=0/0, ticks=713/100, in_queue=813, util=89.56% 00:39:41.003 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:39:41.003 [global] 00:39:41.003 thread=1 00:39:41.003 invalidate=1 00:39:41.003 rw=write 00:39:41.003 time_based=1 00:39:41.003 runtime=1 00:39:41.003 ioengine=libaio 00:39:41.003 direct=1 00:39:41.003 bs=4096 00:39:41.003 iodepth=128 00:39:41.003 norandommap=0 00:39:41.003 numjobs=1 00:39:41.003 00:39:41.003 verify_dump=1 00:39:41.003 verify_backlog=512 00:39:41.003 verify_state_save=0 00:39:41.003 do_verify=1 00:39:41.003 verify=crc32c-intel 00:39:41.003 [job0] 00:39:41.003 filename=/dev/nvme0n1 00:39:41.003 [job1] 00:39:41.003 filename=/dev/nvme0n2 00:39:41.003 [job2] 00:39:41.003 filename=/dev/nvme0n3 00:39:41.003 [job3] 00:39:41.003 filename=/dev/nvme0n4 00:39:41.003 Could not set queue depth (nvme0n1) 00:39:41.003 Could not set queue depth (nvme0n2) 00:39:41.003 Could not set queue depth (nvme0n3) 00:39:41.003 Could not set queue depth (nvme0n4) 00:39:41.262 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:41.262 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:41.262 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:41.262 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:41.262 fio-3.35 00:39:41.262 Starting 4 threads 00:39:42.644 00:39:42.644 job0: (groupid=0, jobs=1): err= 0: pid=445828: Fri Oct 11 23:02:45 2024 00:39:42.644 read: IOPS=4119, BW=16.1MiB/s (16.9MB/s)(16.1MiB/1002msec) 
00:39:42.644 slat (usec): min=2, max=44069, avg=126.90, stdev=1091.58 00:39:42.644 clat (usec): min=560, max=58913, avg=16169.97, stdev=11244.44 00:39:42.644 lat (usec): min=3019, max=58930, avg=16296.88, stdev=11285.03 00:39:42.644 clat percentiles (usec): 00:39:42.644 | 1.00th=[ 9110], 5.00th=[10028], 10.00th=[10421], 20.00th=[10945], 00:39:42.644 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12125], 60.00th=[12649], 00:39:42.644 | 70.00th=[13304], 80.00th=[15139], 90.00th=[27919], 95.00th=[54264], 00:39:42.644 | 99.00th=[56361], 99.50th=[58459], 99.90th=[58983], 99.95th=[58983], 00:39:42.644 | 99.99th=[58983] 00:39:42.644 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:39:42.644 slat (usec): min=3, max=8570, avg=94.51, stdev=469.97 00:39:42.644 clat (usec): min=5273, max=35241, avg=12932.41, stdev=4178.14 00:39:42.644 lat (usec): min=5870, max=35247, avg=13026.92, stdev=4181.46 00:39:42.644 clat percentiles (usec): 00:39:42.644 | 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10814], 00:39:42.644 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11863], 60.00th=[12256], 00:39:42.644 | 70.00th=[12649], 80.00th=[13304], 90.00th=[16319], 95.00th=[24773], 00:39:42.644 | 99.00th=[30540], 99.50th=[33817], 99.90th=[35390], 99.95th=[35390], 00:39:42.644 | 99.99th=[35390] 00:39:42.644 bw ( KiB/s): min=15624, max=20480, per=26.74%, avg=18052.00, stdev=3433.71, samples=2 00:39:42.644 iops : min= 3906, max= 5120, avg=4513.00, stdev=858.43, samples=2 00:39:42.644 lat (usec) : 750=0.01% 00:39:42.644 lat (msec) : 4=0.18%, 10=8.76%, 20=80.70%, 50=7.44%, 100=2.91% 00:39:42.644 cpu : usr=5.09%, sys=8.59%, ctx=481, majf=0, minf=1 00:39:42.644 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:39:42.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:42.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:42.644 issued rwts: total=4128,4608,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:39:42.644 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:42.644 job1: (groupid=0, jobs=1): err= 0: pid=445829: Fri Oct 11 23:02:45 2024 00:39:42.644 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:39:42.644 slat (usec): min=2, max=32297, avg=179.63, stdev=1282.29 00:39:42.644 clat (usec): min=7848, max=84943, avg=23977.39, stdev=17547.97 00:39:42.644 lat (usec): min=7864, max=84951, avg=24157.02, stdev=17670.41 00:39:42.644 clat percentiles (usec): 00:39:42.644 | 1.00th=[10421], 5.00th=[11600], 10.00th=[11994], 20.00th=[12649], 00:39:42.644 | 30.00th=[12911], 40.00th=[15139], 50.00th=[16450], 60.00th=[17957], 00:39:42.644 | 70.00th=[20055], 80.00th=[32113], 90.00th=[52167], 95.00th=[64226], 00:39:42.644 | 99.00th=[76022], 99.50th=[79168], 99.90th=[85459], 99.95th=[85459], 00:39:42.644 | 99.99th=[85459] 00:39:42.644 write: IOPS=3372, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1003msec); 0 zone resets 00:39:42.644 slat (usec): min=2, max=15074, avg=125.22, stdev=697.72 00:39:42.644 clat (usec): min=1735, max=51074, avg=15674.96, stdev=6276.21 00:39:42.644 lat (usec): min=6593, max=51082, avg=15800.18, stdev=6323.69 00:39:42.644 clat percentiles (usec): 00:39:42.644 | 1.00th=[ 6915], 5.00th=[10028], 10.00th=[11207], 20.00th=[12125], 00:39:42.644 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13435], 60.00th=[14222], 00:39:42.645 | 70.00th=[15926], 80.00th=[16909], 90.00th=[26608], 95.00th=[32113], 00:39:42.645 | 99.00th=[37487], 99.50th=[39060], 99.90th=[45351], 99.95th=[51119], 00:39:42.645 | 99.99th=[51119] 00:39:42.645 bw ( KiB/s): min=12968, max=13080, per=19.29%, avg=13024.00, stdev=79.20, samples=2 00:39:42.645 iops : min= 3242, max= 3270, avg=3256.00, stdev=19.80, samples=2 00:39:42.645 lat (msec) : 2=0.02%, 10=3.02%, 20=75.69%, 50=15.82%, 100=5.45% 00:39:42.645 cpu : usr=3.19%, sys=4.89%, ctx=312, majf=0, minf=1 00:39:42.645 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:39:42.645 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:42.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:42.645 issued rwts: total=3072,3383,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:42.645 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:42.645 job2: (groupid=0, jobs=1): err= 0: pid=445830: Fri Oct 11 23:02:45 2024 00:39:42.645 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:39:42.645 slat (usec): min=2, max=13654, avg=106.72, stdev=734.91 00:39:42.645 clat (usec): min=5190, max=26694, avg=14164.04, stdev=3322.68 00:39:42.645 lat (usec): min=5195, max=26704, avg=14270.76, stdev=3354.64 00:39:42.645 clat percentiles (usec): 00:39:42.645 | 1.00th=[ 6063], 5.00th=[ 8356], 10.00th=[ 9896], 20.00th=[11600], 00:39:42.645 | 30.00th=[12518], 40.00th=[13698], 50.00th=[14353], 60.00th=[15401], 00:39:42.645 | 70.00th=[15664], 80.00th=[16909], 90.00th=[17957], 95.00th=[19006], 00:39:42.645 | 99.00th=[22938], 99.50th=[24511], 99.90th=[26608], 99.95th=[26608], 00:39:42.645 | 99.99th=[26608] 00:39:42.645 write: IOPS=4401, BW=17.2MiB/s (18.0MB/s)(17.4MiB/1010msec); 0 zone resets 00:39:42.645 slat (usec): min=3, max=12595, avg=114.81, stdev=717.78 00:39:42.645 clat (usec): min=288, max=54707, avg=15645.78, stdev=7633.67 00:39:42.645 lat (usec): min=437, max=54714, avg=15760.59, stdev=7673.56 00:39:42.645 clat percentiles (usec): 00:39:42.645 | 1.00th=[ 4490], 5.00th=[ 8455], 10.00th=[10028], 20.00th=[12125], 00:39:42.645 | 30.00th=[12780], 40.00th=[13173], 50.00th=[14091], 60.00th=[14746], 00:39:42.645 | 70.00th=[15139], 80.00th=[15664], 90.00th=[23725], 95.00th=[34341], 00:39:42.645 | 99.00th=[46400], 99.50th=[52691], 99.90th=[54789], 99.95th=[54789], 00:39:42.645 | 99.99th=[54789] 00:39:42.645 bw ( KiB/s): min=17032, max=17520, per=25.59%, avg=17276.00, stdev=345.07, samples=2 00:39:42.645 iops : min= 4258, max= 4380, avg=4319.00, stdev=86.27, samples=2 00:39:42.645 lat (usec) : 500=0.01% 
00:39:42.645 lat (msec) : 4=0.02%, 10=9.47%, 20=82.37%, 50=7.74%, 100=0.39% 00:39:42.645 cpu : usr=3.87%, sys=4.96%, ctx=356, majf=0, minf=1 00:39:42.645 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:39:42.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:42.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:42.645 issued rwts: total=4096,4446,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:42.645 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:42.645 job3: (groupid=0, jobs=1): err= 0: pid=445831: Fri Oct 11 23:02:45 2024 00:39:42.645 read: IOPS=4455, BW=17.4MiB/s (18.2MB/s)(17.5MiB/1003msec) 00:39:42.645 slat (usec): min=3, max=8240, avg=104.99, stdev=573.97 00:39:42.645 clat (usec): min=1598, max=22926, avg=13659.69, stdev=2334.52 00:39:42.645 lat (usec): min=4064, max=22935, avg=13764.68, stdev=2364.91 00:39:42.645 clat percentiles (usec): 00:39:42.645 | 1.00th=[ 7701], 5.00th=[10683], 10.00th=[11207], 20.00th=[11863], 00:39:42.645 | 30.00th=[12518], 40.00th=[13042], 50.00th=[13698], 60.00th=[14091], 00:39:42.645 | 70.00th=[14615], 80.00th=[15139], 90.00th=[16450], 95.00th=[17695], 00:39:42.645 | 99.00th=[20317], 99.50th=[22938], 99.90th=[22938], 99.95th=[22938], 00:39:42.645 | 99.99th=[22938] 00:39:42.645 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:39:42.645 slat (usec): min=4, max=12344, avg=104.86, stdev=576.91 00:39:42.645 clat (usec): min=7302, max=33497, avg=14267.01, stdev=2534.62 00:39:42.645 lat (usec): min=7336, max=33512, avg=14371.88, stdev=2591.70 00:39:42.645 clat percentiles (usec): 00:39:42.645 | 1.00th=[ 8717], 5.00th=[11600], 10.00th=[12125], 20.00th=[12649], 00:39:42.645 | 30.00th=[13173], 40.00th=[13566], 50.00th=[13829], 60.00th=[14222], 00:39:42.645 | 70.00th=[14615], 80.00th=[15270], 90.00th=[16712], 95.00th=[19792], 00:39:42.645 | 99.00th=[24249], 99.50th=[24511], 99.90th=[26870], 99.95th=[29492], 
00:39:42.645 | 99.99th=[33424] 00:39:42.645 bw ( KiB/s): min=18136, max=18728, per=27.30%, avg=18432.00, stdev=418.61, samples=2 00:39:42.645 iops : min= 4534, max= 4682, avg=4608.00, stdev=104.65, samples=2 00:39:42.645 lat (msec) : 2=0.01%, 10=2.47%, 20=95.03%, 50=2.49% 00:39:42.645 cpu : usr=5.79%, sys=10.58%, ctx=451, majf=0, minf=2 00:39:42.645 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:39:42.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:42.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:42.645 issued rwts: total=4469,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:42.645 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:42.645 00:39:42.645 Run status group 0 (all jobs): 00:39:42.645 READ: bw=61.0MiB/s (63.9MB/s), 12.0MiB/s-17.4MiB/s (12.5MB/s-18.2MB/s), io=61.6MiB (64.6MB), run=1002-1010msec 00:39:42.645 WRITE: bw=65.9MiB/s (69.1MB/s), 13.2MiB/s-18.0MiB/s (13.8MB/s-18.8MB/s), io=66.6MiB (69.8MB), run=1002-1010msec 00:39:42.645 00:39:42.645 Disk stats (read/write): 00:39:42.645 nvme0n1: ios=3634/4032, merge=0/0, ticks=13557/11927, in_queue=25484, util=90.98% 00:39:42.645 nvme0n2: ios=2565/2807, merge=0/0, ticks=20969/13535, in_queue=34504, util=86.79% 00:39:42.645 nvme0n3: ios=3632/3903, merge=0/0, ticks=37241/38068, in_queue=75309, util=99.90% 00:39:42.645 nvme0n4: ios=3634/4012, merge=0/0, ticks=24651/26801, in_queue=51452, util=99.27% 00:39:42.645 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:39:42.645 [global] 00:39:42.645 thread=1 00:39:42.645 invalidate=1 00:39:42.645 rw=randwrite 00:39:42.645 time_based=1 00:39:42.645 runtime=1 00:39:42.645 ioengine=libaio 00:39:42.645 direct=1 00:39:42.645 bs=4096 00:39:42.645 iodepth=128 00:39:42.645 norandommap=0 00:39:42.645 numjobs=1 
00:39:42.645 00:39:42.645 verify_dump=1 00:39:42.645 verify_backlog=512 00:39:42.645 verify_state_save=0 00:39:42.645 do_verify=1 00:39:42.645 verify=crc32c-intel 00:39:42.645 [job0] 00:39:42.645 filename=/dev/nvme0n1 00:39:42.645 [job1] 00:39:42.645 filename=/dev/nvme0n2 00:39:42.645 [job2] 00:39:42.645 filename=/dev/nvme0n3 00:39:42.645 [job3] 00:39:42.645 filename=/dev/nvme0n4 00:39:42.645 Could not set queue depth (nvme0n1) 00:39:42.645 Could not set queue depth (nvme0n2) 00:39:42.645 Could not set queue depth (nvme0n3) 00:39:42.645 Could not set queue depth (nvme0n4) 00:39:42.645 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:42.645 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:42.645 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:42.645 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:42.645 fio-3.35 00:39:42.645 Starting 4 threads 00:39:44.019 00:39:44.019 job0: (groupid=0, jobs=1): err= 0: pid=446061: Fri Oct 11 23:02:47 2024 00:39:44.019 read: IOPS=2162, BW=8651KiB/s (8859kB/s)(8772KiB/1014msec) 00:39:44.019 slat (usec): min=2, max=15607, avg=152.35, stdev=948.86 00:39:44.019 clat (usec): min=8626, max=89622, avg=16297.36, stdev=10076.98 00:39:44.019 lat (usec): min=8632, max=89631, avg=16449.70, stdev=10248.19 00:39:44.019 clat percentiles (usec): 00:39:44.019 | 1.00th=[ 9765], 5.00th=[10552], 10.00th=[11338], 20.00th=[11994], 00:39:44.019 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13304], 60.00th=[13435], 00:39:44.019 | 70.00th=[14353], 80.00th=[15270], 90.00th=[24249], 95.00th=[40109], 00:39:44.019 | 99.00th=[67634], 99.50th=[70779], 99.90th=[79168], 99.95th=[79168], 00:39:44.019 | 99.99th=[89654] 00:39:44.019 write: IOPS=2524, BW=9.86MiB/s (10.3MB/s)(10.0MiB/1014msec); 0 zone 
resets 00:39:44.019 slat (usec): min=3, max=15994, avg=254.24, stdev=1477.90 00:39:44.019 clat (msec): min=7, max=194, avg=36.04, stdev=50.89 00:39:44.019 lat (msec): min=7, max=194, avg=36.30, stdev=51.25 00:39:44.019 clat percentiles (msec): 00:39:44.019 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 12], 00:39:44.019 | 30.00th=[ 13], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 14], 00:39:44.019 | 70.00th=[ 20], 80.00th=[ 31], 90.00th=[ 142], 95.00th=[ 180], 00:39:44.019 | 99.00th=[ 192], 99.50th=[ 194], 99.90th=[ 194], 99.95th=[ 194], 00:39:44.019 | 99.99th=[ 194] 00:39:44.019 bw ( KiB/s): min= 3824, max=16656, per=17.34%, avg=10240.00, stdev=9073.59, samples=2 00:39:44.019 iops : min= 956, max= 4164, avg=2560.00, stdev=2268.40, samples=2 00:39:44.019 lat (msec) : 10=1.66%, 20=76.88%, 50=12.50%, 100=2.00%, 250=6.96% 00:39:44.019 cpu : usr=2.76%, sys=1.97%, ctx=214, majf=0, minf=1 00:39:44.019 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:39:44.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:44.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:44.019 issued rwts: total=2193,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:44.019 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:44.019 job1: (groupid=0, jobs=1): err= 0: pid=446062: Fri Oct 11 23:02:47 2024 00:39:44.019 read: IOPS=5113, BW=20.0MiB/s (20.9MB/s)(21.0MiB/1049msec) 00:39:44.019 slat (usec): min=2, max=12141, avg=104.72, stdev=749.63 00:39:44.019 clat (usec): min=2931, max=59301, avg=13282.53, stdev=6882.76 00:39:44.019 lat (usec): min=2938, max=59945, avg=13387.24, stdev=6912.08 00:39:44.019 clat percentiles (usec): 00:39:44.019 | 1.00th=[ 3982], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9503], 00:39:44.019 | 30.00th=[ 9896], 40.00th=[11338], 50.00th=[12125], 60.00th=[12387], 00:39:44.019 | 70.00th=[13173], 80.00th=[15795], 90.00th=[18482], 95.00th=[22414], 00:39:44.019 | 99.00th=[50594], 
99.50th=[50594], 99.90th=[59507], 99.95th=[59507], 00:39:44.019 | 99.99th=[59507] 00:39:44.019 write: IOPS=5368, BW=21.0MiB/s (22.0MB/s)(22.0MiB/1049msec); 0 zone resets 00:39:44.019 slat (usec): min=3, max=9358, avg=72.57, stdev=284.70 00:39:44.019 clat (usec): min=1136, max=23957, avg=10990.07, stdev=2691.53 00:39:44.019 lat (usec): min=1145, max=23963, avg=11062.63, stdev=2711.60 00:39:44.019 clat percentiles (usec): 00:39:44.019 | 1.00th=[ 2966], 5.00th=[ 5014], 10.00th=[ 7439], 20.00th=[10159], 00:39:44.019 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:39:44.019 | 70.00th=[12649], 80.00th=[13304], 90.00th=[13829], 95.00th=[14353], 00:39:44.019 | 99.00th=[16188], 99.50th=[16581], 99.90th=[23200], 99.95th=[23725], 00:39:44.019 | 99.99th=[23987] 00:39:44.019 bw ( KiB/s): min=20480, max=24576, per=38.16%, avg=22528.00, stdev=2896.31, samples=2 00:39:44.019 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:39:44.019 lat (msec) : 2=0.06%, 4=1.96%, 10=25.01%, 20=69.14%, 50=2.96% 00:39:44.019 lat (msec) : 100=0.87% 00:39:44.019 cpu : usr=3.91%, sys=6.49%, ctx=736, majf=0, minf=1 00:39:44.019 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:39:44.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:44.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:44.019 issued rwts: total=5364,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:44.019 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:44.019 job2: (groupid=0, jobs=1): err= 0: pid=446063: Fri Oct 11 23:02:47 2024 00:39:44.019 read: IOPS=2524, BW=9.86MiB/s (10.3MB/s)(10.0MiB/1014msec) 00:39:44.019 slat (usec): min=2, max=23309, avg=152.45, stdev=1188.05 00:39:44.019 clat (usec): min=7576, max=58793, avg=20927.88, stdev=11367.78 00:39:44.019 lat (usec): min=7580, max=58798, avg=21080.32, stdev=11454.16 00:39:44.019 clat percentiles (usec): 00:39:44.019 | 1.00th=[ 7570], 
5.00th=[10159], 10.00th=[10683], 20.00th=[12256], 00:39:44.019 | 30.00th=[13435], 40.00th=[14484], 50.00th=[16188], 60.00th=[19530], 00:39:44.019 | 70.00th=[21365], 80.00th=[31065], 90.00th=[39584], 95.00th=[42206], 00:39:44.019 | 99.00th=[58983], 99.50th=[58983], 99.90th=[58983], 99.95th=[58983], 00:39:44.019 | 99.99th=[58983] 00:39:44.019 write: IOPS=2961, BW=11.6MiB/s (12.1MB/s)(11.7MiB/1014msec); 0 zone resets 00:39:44.019 slat (usec): min=3, max=24202, avg=188.54, stdev=1715.54 00:39:44.019 clat (usec): min=496, max=69170, avg=25023.33, stdev=13528.62 00:39:44.019 lat (usec): min=722, max=69186, avg=25211.87, stdev=13696.50 00:39:44.019 clat percentiles (usec): 00:39:44.019 | 1.00th=[ 2573], 5.00th=[ 6456], 10.00th=[10028], 20.00th=[13435], 00:39:44.019 | 30.00th=[13829], 40.00th=[17433], 50.00th=[22938], 60.00th=[26346], 00:39:44.019 | 70.00th=[35390], 80.00th=[39584], 90.00th=[44827], 95.00th=[45876], 00:39:44.019 | 99.00th=[53216], 99.50th=[53216], 99.90th=[66323], 99.95th=[67634], 00:39:44.019 | 99.99th=[68682] 00:39:44.019 bw ( KiB/s): min= 6616, max=16384, per=19.48%, avg=11500.00, stdev=6907.02, samples=2 00:39:44.019 iops : min= 1654, max= 4096, avg=2875.00, stdev=1726.75, samples=2 00:39:44.019 lat (usec) : 500=0.02%, 750=0.07%, 1000=0.07% 00:39:44.019 lat (msec) : 2=0.14%, 4=0.74%, 10=6.17%, 20=50.03%, 50=41.24% 00:39:44.019 lat (msec) : 100=1.53% 00:39:44.019 cpu : usr=1.58%, sys=2.37%, ctx=152, majf=0, minf=2 00:39:44.019 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:39:44.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:44.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:44.019 issued rwts: total=2560,3003,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:44.019 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:44.019 job3: (groupid=0, jobs=1): err= 0: pid=446064: Fri Oct 11 23:02:47 2024 00:39:44.019 read: IOPS=4043, BW=15.8MiB/s 
(16.6MB/s)(16.0MiB/1013msec) 00:39:44.019 slat (usec): min=2, max=15921, avg=113.63, stdev=907.79 00:39:44.019 clat (usec): min=1985, max=35686, avg=16062.91, stdev=5146.45 00:39:44.019 lat (usec): min=1987, max=35692, avg=16176.55, stdev=5181.88 00:39:44.019 clat percentiles (usec): 00:39:44.019 | 1.00th=[ 4490], 5.00th=[ 7570], 10.00th=[ 8586], 20.00th=[12256], 00:39:44.019 | 30.00th=[13435], 40.00th=[14353], 50.00th=[16188], 60.00th=[17171], 00:39:44.019 | 70.00th=[19268], 80.00th=[20841], 90.00th=[21890], 95.00th=[23725], 00:39:44.019 | 99.00th=[28705], 99.50th=[30802], 99.90th=[32375], 99.95th=[35914], 00:39:44.019 | 99.99th=[35914] 00:39:44.019 write: IOPS=4232, BW=16.5MiB/s (17.3MB/s)(16.8MiB/1013msec); 0 zone resets 00:39:44.019 slat (usec): min=3, max=19385, avg=100.12, stdev=884.95 00:39:44.019 clat (usec): min=327, max=63448, avg=14627.75, stdev=6517.94 00:39:44.019 lat (usec): min=393, max=63453, avg=14727.87, stdev=6562.86 00:39:44.019 clat percentiles (usec): 00:39:44.019 | 1.00th=[ 5473], 5.00th=[ 7242], 10.00th=[ 8586], 20.00th=[11076], 00:39:44.019 | 30.00th=[11469], 40.00th=[12387], 50.00th=[13566], 60.00th=[13960], 00:39:44.019 | 70.00th=[14484], 80.00th=[19530], 90.00th=[21890], 95.00th=[24249], 00:39:44.019 | 99.00th=[47449], 99.50th=[55313], 99.90th=[63177], 99.95th=[63701], 00:39:44.019 | 99.99th=[63701] 00:39:44.019 bw ( KiB/s): min=16448, max=16888, per=28.23%, avg=16668.00, stdev=311.13, samples=2 00:39:44.019 iops : min= 4112, max= 4222, avg=4167.00, stdev=77.78, samples=2 00:39:44.019 lat (usec) : 500=0.04%, 1000=0.04% 00:39:44.019 lat (msec) : 2=0.05%, 4=0.24%, 10=12.32%, 20=66.22%, 50=20.65% 00:39:44.019 lat (msec) : 100=0.45% 00:39:44.019 cpu : usr=1.88%, sys=5.14%, ctx=247, majf=0, minf=1 00:39:44.019 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:39:44.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:44.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:39:44.019 issued rwts: total=4096,4288,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:44.019 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:44.019 00:39:44.019 Run status group 0 (all jobs): 00:39:44.020 READ: bw=52.9MiB/s (55.5MB/s), 8651KiB/s-20.0MiB/s (8859kB/s-20.9MB/s), io=55.5MiB (58.2MB), run=1013-1049msec 00:39:44.020 WRITE: bw=57.7MiB/s (60.5MB/s), 9.86MiB/s-21.0MiB/s (10.3MB/s-22.0MB/s), io=60.5MiB (63.4MB), run=1013-1049msec 00:39:44.020 00:39:44.020 Disk stats (read/write): 00:39:44.020 nvme0n1: ios=2098/2479, merge=0/0, ticks=10386/23310, in_queue=33696, util=86.97% 00:39:44.020 nvme0n2: ios=4392/4608, merge=0/0, ticks=53413/50891, in_queue=104304, util=91.17% 00:39:44.020 nvme0n3: ios=2449/2560, merge=0/0, ticks=30233/33785, in_queue=64018, util=88.95% 00:39:44.020 nvme0n4: ios=3528/3584, merge=0/0, ticks=52087/48102, in_queue=100189, util=98.32% 00:39:44.020 23:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:39:44.020 23:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=446196 00:39:44.020 23:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:39:44.020 23:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:39:44.020 [global] 00:39:44.020 thread=1 00:39:44.020 invalidate=1 00:39:44.020 rw=read 00:39:44.020 time_based=1 00:39:44.020 runtime=10 00:39:44.020 ioengine=libaio 00:39:44.020 direct=1 00:39:44.020 bs=4096 00:39:44.020 iodepth=1 00:39:44.020 norandommap=1 00:39:44.020 numjobs=1 00:39:44.020 00:39:44.020 [job0] 00:39:44.020 filename=/dev/nvme0n1 00:39:44.020 [job1] 00:39:44.020 filename=/dev/nvme0n2 00:39:44.020 [job2] 00:39:44.020 filename=/dev/nvme0n3 00:39:44.020 [job3] 00:39:44.020 filename=/dev/nvme0n4 00:39:44.020 Could not set queue 
depth (nvme0n1) 00:39:44.020 Could not set queue depth (nvme0n2) 00:39:44.020 Could not set queue depth (nvme0n3) 00:39:44.020 Could not set queue depth (nvme0n4) 00:39:44.020 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:44.020 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:44.020 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:44.020 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:44.020 fio-3.35 00:39:44.020 Starting 4 threads 00:39:47.305 23:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:39:47.305 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=37392384, buflen=4096 00:39:47.305 fio: pid=446407, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:47.306 23:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:39:47.563 23:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:47.563 23:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:39:47.563 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=8093696, buflen=4096 00:39:47.563 fio: pid=446406, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:47.821 23:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:39:47.821 23:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:39:47.821 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=5529600, buflen=4096 00:39:47.821 fio: pid=446404, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:48.080 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:48.080 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:39:48.080 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=29102080, buflen=4096 00:39:48.080 fio: pid=446405, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:48.080 00:39:48.080 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=446404: Fri Oct 11 23:02:51 2024 00:39:48.080 read: IOPS=382, BW=1530KiB/s (1566kB/s)(5400KiB/3530msec) 00:39:48.080 slat (usec): min=4, max=5905, avg=15.66, stdev=168.51 00:39:48.080 clat (usec): min=196, max=41142, avg=2579.21, stdev=9254.11 00:39:48.080 lat (usec): min=203, max=47047, avg=2594.87, stdev=9281.28 00:39:48.080 clat percentiles (usec): 00:39:48.080 | 1.00th=[ 210], 5.00th=[ 225], 10.00th=[ 239], 20.00th=[ 253], 00:39:48.080 | 30.00th=[ 269], 40.00th=[ 302], 50.00th=[ 338], 60.00th=[ 367], 00:39:48.080 | 70.00th=[ 396], 80.00th=[ 437], 90.00th=[ 502], 95.00th=[41157], 00:39:48.080 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:48.080 | 99.99th=[41157] 00:39:48.080 bw ( KiB/s): min= 96, max= 5848, per=6.70%, avg=1374.67, stdev=2318.23, samples=6 00:39:48.080 iops : min= 24, max= 
1462, avg=343.67, stdev=579.56, samples=6 00:39:48.080 lat (usec) : 250=17.62%, 500=72.09%, 750=4.44% 00:39:48.080 lat (msec) : 2=0.07%, 4=0.07%, 10=0.07%, 20=0.07%, 50=5.48% 00:39:48.080 cpu : usr=0.11%, sys=0.45%, ctx=1353, majf=0, minf=2 00:39:48.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:48.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.080 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.080 issued rwts: total=1351,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:48.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:48.080 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=446405: Fri Oct 11 23:02:51 2024 00:39:48.080 read: IOPS=1862, BW=7448KiB/s (7626kB/s)(27.8MiB/3816msec) 00:39:48.080 slat (usec): min=4, max=11662, avg=13.22, stdev=186.63 00:39:48.080 clat (usec): min=209, max=42233, avg=518.05, stdev=2994.34 00:39:48.080 lat (usec): min=214, max=48963, avg=531.27, stdev=3015.53 00:39:48.080 clat percentiles (usec): 00:39:48.080 | 1.00th=[ 231], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 262], 00:39:48.080 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 289], 00:39:48.080 | 70.00th=[ 302], 80.00th=[ 318], 90.00th=[ 371], 95.00th=[ 433], 00:39:48.080 | 99.00th=[ 578], 99.50th=[40633], 99.90th=[42206], 99.95th=[42206], 00:39:48.080 | 99.99th=[42206] 00:39:48.080 bw ( KiB/s): min= 376, max=12736, per=39.46%, avg=8090.14, stdev=5382.24, samples=7 00:39:48.080 iops : min= 94, max= 3184, avg=2022.43, stdev=1345.73, samples=7 00:39:48.080 lat (usec) : 250=7.98%, 500=90.61%, 750=0.61%, 1000=0.14% 00:39:48.080 lat (msec) : 2=0.08%, 4=0.01%, 10=0.01%, 50=0.53% 00:39:48.080 cpu : usr=1.15%, sys=2.60%, ctx=7109, majf=0, minf=2 00:39:48.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:48.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:39:48.080 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.080 issued rwts: total=7106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:48.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:48.080 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=446406: Fri Oct 11 23:02:51 2024 00:39:48.080 read: IOPS=611, BW=2446KiB/s (2504kB/s)(7904KiB/3232msec) 00:39:48.080 slat (usec): min=4, max=14962, avg=22.04, stdev=426.33 00:39:48.080 clat (usec): min=182, max=42460, avg=1599.81, stdev=7288.36 00:39:48.080 lat (usec): min=191, max=42473, avg=1621.86, stdev=7300.44 00:39:48.080 clat percentiles (usec): 00:39:48.080 | 1.00th=[ 208], 5.00th=[ 212], 10.00th=[ 215], 20.00th=[ 221], 00:39:48.080 | 30.00th=[ 229], 40.00th=[ 241], 50.00th=[ 262], 60.00th=[ 273], 00:39:48.080 | 70.00th=[ 285], 80.00th=[ 310], 90.00th=[ 388], 95.00th=[ 529], 00:39:48.080 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:48.080 | 99.99th=[42206] 00:39:48.080 bw ( KiB/s): min= 104, max= 8680, per=7.52%, avg=1542.67, stdev=3496.59, samples=6 00:39:48.080 iops : min= 26, max= 2170, avg=385.67, stdev=874.15, samples=6 00:39:48.080 lat (usec) : 250=43.75%, 500=50.94%, 750=1.06%, 1000=0.61% 00:39:48.080 lat (msec) : 2=0.40%, 50=3.19% 00:39:48.080 cpu : usr=0.34%, sys=0.46%, ctx=1982, majf=0, minf=1 00:39:48.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:48.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.080 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.080 issued rwts: total=1977,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:48.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:48.080 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=446407: Fri Oct 11 23:02:51 2024 00:39:48.080 read: 
IOPS=3123, BW=12.2MiB/s (12.8MB/s)(35.7MiB/2923msec) 00:39:48.080 slat (nsec): min=4241, max=65264, avg=11117.72, stdev=6448.95 00:39:48.080 clat (usec): min=203, max=3621, avg=305.29, stdev=77.09 00:39:48.080 lat (usec): min=209, max=3634, avg=316.40, stdev=78.99 00:39:48.080 clat percentiles (usec): 00:39:48.080 | 1.00th=[ 239], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 262], 00:39:48.080 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 293], 00:39:48.080 | 70.00th=[ 310], 80.00th=[ 347], 90.00th=[ 392], 95.00th=[ 420], 00:39:48.080 | 99.00th=[ 515], 99.50th=[ 545], 99.90th=[ 594], 99.95th=[ 644], 00:39:48.080 | 99.99th=[ 3621] 00:39:48.080 bw ( KiB/s): min=11824, max=13336, per=60.63%, avg=12432.00, stdev=603.72, samples=5 00:39:48.080 iops : min= 2956, max= 3334, avg=3108.00, stdev=150.93, samples=5 00:39:48.080 lat (usec) : 250=7.22%, 500=91.24%, 750=1.51% 00:39:48.080 lat (msec) : 4=0.02% 00:39:48.080 cpu : usr=1.78%, sys=4.14%, ctx=9130, majf=0, minf=2 00:39:48.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:48.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.080 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.080 issued rwts: total=9130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:48.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:48.080 00:39:48.080 Run status group 0 (all jobs): 00:39:48.080 READ: bw=20.0MiB/s (21.0MB/s), 1530KiB/s-12.2MiB/s (1566kB/s-12.8MB/s), io=76.4MiB (80.1MB), run=2923-3816msec 00:39:48.080 00:39:48.080 Disk stats (read/write): 00:39:48.080 nvme0n1: ios=1346/0, merge=0/0, ticks=3312/0, in_queue=3312, util=95.97% 00:39:48.080 nvme0n2: ios=7100/0, merge=0/0, ticks=3371/0, in_queue=3371, util=95.90% 00:39:48.080 nvme0n3: ios=1578/0, merge=0/0, ticks=3195/0, in_queue=3195, util=98.41% 00:39:48.081 nvme0n4: ios=8934/0, merge=0/0, ticks=2670/0, in_queue=2670, util=96.72% 00:39:48.339 23:02:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:48.339 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:39:48.597 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:48.597 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:39:48.855 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:48.855 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:39:49.113 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:49.113 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:39:49.679 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:39:49.679 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 446196 00:39:49.679 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:39:49.679 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:49.679 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:49.679 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:49.679 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:39:49.679 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:39:49.679 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:49.679 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:39:49.679 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:49.679 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:39:49.679 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:39:49.679 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:39:49.679 nvmf hotplug test: fio failed as expected 00:39:49.679 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:49.938 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:39:49.938 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:39:49.938 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:39:49.938 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:39:49.938 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:39:49.938 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:49.938 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:39:49.938 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:49.938 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:39:49.938 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:49.938 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:49.938 rmmod nvme_tcp 00:39:49.938 rmmod nvme_fabrics 00:39:49.938 rmmod nvme_keyring 00:39:49.938 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:49.938 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:39:49.938 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:39:49.938 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 444306 ']' 00:39:49.938 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 444306 00:39:49.938 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 444306 ']' 00:39:49.938 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 444306 00:39:49.938 23:02:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:39:49.938 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:49.938 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 444306 00:39:49.938 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:49.938 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:49.938 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 444306' 00:39:49.938 killing process with pid 444306 00:39:49.938 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 444306 00:39:49.938 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 444306 00:39:50.196 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:50.196 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:50.196 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:50.196 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:39:50.196 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:39:50.196 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:50.196 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:39:50.196 23:02:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:50.196 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:50.196 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:50.196 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:50.196 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:52.733 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:52.733 00:39:52.733 real 0m23.861s 00:39:52.733 user 1m7.546s 00:39:52.733 sys 0m10.075s 00:39:52.733 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:52.733 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:52.733 ************************************ 00:39:52.733 END TEST nvmf_fio_target 00:39:52.733 ************************************ 00:39:52.733 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:52.733 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:52.733 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:52.734 ************************************ 00:39:52.734 START TEST nvmf_bdevio 00:39:52.734 
************************************ 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:52.734 * Looking for test storage... 00:39:52.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:52.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:52.734 --rc genhtml_branch_coverage=1 00:39:52.734 --rc genhtml_function_coverage=1 00:39:52.734 --rc genhtml_legend=1 00:39:52.734 --rc geninfo_all_blocks=1 00:39:52.734 --rc geninfo_unexecuted_blocks=1 00:39:52.734 00:39:52.734 ' 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:52.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:52.734 --rc genhtml_branch_coverage=1 00:39:52.734 --rc genhtml_function_coverage=1 00:39:52.734 --rc genhtml_legend=1 00:39:52.734 --rc geninfo_all_blocks=1 00:39:52.734 --rc geninfo_unexecuted_blocks=1 00:39:52.734 00:39:52.734 ' 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:52.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:52.734 --rc genhtml_branch_coverage=1 00:39:52.734 --rc genhtml_function_coverage=1 00:39:52.734 --rc genhtml_legend=1 00:39:52.734 --rc geninfo_all_blocks=1 00:39:52.734 --rc geninfo_unexecuted_blocks=1 00:39:52.734 00:39:52.734 ' 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:52.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
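The trace above steps through the `cmp_versions` helper in `scripts/common.sh`: both version strings are split on `.` into arrays with `read -ra`, then compared component by component (here deciding that lcov `1.15` is less than `2`, so the LCOV coverage options get set). A minimal standalone sketch of the same idea — the function name here is illustrative, not the script's own, and it assumes purely numeric components:

```shell
#!/usr/bin/env bash
# Component-wise dotted-version comparison, in the spirit of the
# cmp_versions/lt helpers traced above. Returns 0 (true) when $1 < $2.
# Assumes numeric components only (e.g. "1.15", not "1.15-rc1").
version_lt() {
    local -a ver1 ver2
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local i a b
    for (( i = 0; i < len; i++ )); do
        a=${ver1[i]:-0} b=${ver2[i]:-0}   # missing components compare as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```

`sort -V` could do the same comparison, but the explicit loop mirrors the array-based logic the trace shows.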
00:39:52.734 --rc genhtml_branch_coverage=1 00:39:52.734 --rc genhtml_function_coverage=1 00:39:52.734 --rc genhtml_legend=1 00:39:52.734 --rc geninfo_all_blocks=1 00:39:52.734 --rc geninfo_unexecuted_blocks=1 00:39:52.734 00:39:52.734 ' 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:52.734 23:02:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.734 23:02:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:52.734 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:52.735 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:52.735 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
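Each time `paths/export.sh` is sourced it prepends the toolchain directories again, which is why the exported PATH in the trace carries the same `/opt/go`, `/opt/protoc`, and `/opt/golangci` entries many times over. Duplicate entries are harmless to command lookup but make the variable hard to read; a small dedup pass fixes that. A hedged sketch, not part of the SPDK scripts:

```shell
#!/usr/bin/env bash
# Drop repeated entries from a colon-separated path list, keeping the
# first occurrence of each so lookup precedence is unchanged.
dedup_path() {
    local -a entries
    local -A seen=()
    local entry out=
    IFS=: read -ra entries <<< "$1"
    for entry in "${entries[@]}"; do
        [[ -n ${seen[$entry]} ]] && continue   # already emitted
        seen[$entry]=1
        out+=${out:+:}$entry                   # join with ':' after the first
    done
    printf '%s\n' "$out"
}

dedup_path "/opt/go/bin:/usr/bin:/opt/go/bin:/bin:/usr/bin"
```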
-- # '[' 0 -eq 1 ']' 00:39:52.735 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:52.735 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:52.735 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:52.735 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:39:52.735 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:52.735 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:52.735 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:52.735 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:52.735 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:52.735 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:52.735 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:52.735 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:52.735 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:52.735 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:52.735 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:39:52.735 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:39:54.700 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:54.700 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:39:54.700 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:54.700 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:54.700 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:54.700 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:54.700 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:54.700 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:39:54.700 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:54.700 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:39:54.700 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:39:54.700 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:39:54.700 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:39:54.700 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:39:54.700 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:39:54.700 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:54.700 23:02:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:54.700 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:54.700 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:54.700 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:54.700 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:54.700 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:54.700 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:54.700 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:54.700 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:54.700 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:54.700 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:54.700 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:54.700 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:54.700 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:54.701 23:02:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:54.701 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:54.701 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:54.701 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 
00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:54.701 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:54.701 
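The discovery loop above resolves each matching PCI address to its kernel network interface by globbing `/sys/bus/pci/devices/$pci/net/`, then stripping the leading path with `${pci_net_devs[@]##*/}` — that is how `0000:0a:00.0` and `0000:0a:00.1` map to `cvl_0_0` and `cvl_0_1`. The same glob can be exercised against any sysfs-shaped tree; this sketch builds a throwaway one so it runs without real hardware (the device and interface names are copied from the trace purely for illustration):

```shell
#!/usr/bin/env bash
# Resolve PCI addresses to net device names via a sysfs-style layout,
# mirroring the pci_net_devs glob in the trace. Uses a temp tree in
# place of /sys/bus/pci/devices so no hardware is needed.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:0a:00.0/net/cvl_0_0" "$sysfs/0000:0a:00.1/net/cvl_0_1"

net_devs=()
for pci in "$sysfs"/*; do
    pci_net_devs=("$pci"/net/*)              # one glob per PCI device
    pci_net_devs=("${pci_net_devs[@]##*/}")  # keep only the interface name
    net_devs+=("${pci_net_devs[@]}")
done
printf '%s\n' "${net_devs[@]}"
rm -rf "$sysfs"
```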
23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:54.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:54.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:39:54.701 00:39:54.701 --- 10.0.0.2 ping statistics --- 00:39:54.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:54.701 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:54.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:54.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:39:54.701 00:39:54.701 --- 10.0.0.1 ping statistics --- 00:39:54.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:54.701 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
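The `nvmf_tcp_init` sequence above isolates the target interface in its own network namespace, numbers both ends of the link, brings the interfaces (and the namespace loopback) up, opens the NVMe/TCP port in the firewall, and verifies reachability with a ping in each direction. A dry-run sketch of the same sequence — the commands are only printed here, since the real ones need root and the actual `cvl_0_*` interfaces; swap `run` for plain execution to apply them:

```shell
#!/usr/bin/env bash
# Dry-run of the namespace plumbing from the trace: each step is
# echoed instead of executed. Order follows the trace.
run() { printf '%s\n' "$*"; }

NS=cvl_0_0_ns_spdk TGT_IF=cvl_0_0 INI_IF=cvl_0_1
TGT_IP=10.0.0.2 INI_IP=10.0.0.1 PORT=4420

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"                        # target side moves into the namespace
run ip addr add "$INI_IP/24" dev "$INI_IF"                   # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# The ipts helper in the trace also tags this rule with an SPDK_NVMF
# comment so it can be cleaned up later.
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport "$PORT" -j ACCEPT
run ping -c 1 "$TGT_IP"                                      # root ns -> target
run ip netns exec "$NS" ping -c 1 "$INI_IP"                  # target ns -> initiator
```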
nvmf/common.sh@507 -- # nvmfpid=449038 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 449038 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 449038 ']' 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:54.701 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:54.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:54.702 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:54.702 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:54.702 [2024-10-11 23:02:57.819196] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:54.702 [2024-10-11 23:02:57.820324] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:39:54.702 [2024-10-11 23:02:57.820381] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:54.702 [2024-10-11 23:02:57.890001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:54.702 [2024-10-11 23:02:57.937684] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:54.702 [2024-10-11 23:02:57.937745] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:54.702 [2024-10-11 23:02:57.937774] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:54.702 [2024-10-11 23:02:57.937786] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:54.702 [2024-10-11 23:02:57.937796] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:54.702 [2024-10-11 23:02:57.939402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:54.702 [2024-10-11 23:02:57.939468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:39:54.702 [2024-10-11 23:02:57.939531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:39:54.702 [2024-10-11 23:02:57.939535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:54.960 [2024-10-11 23:02:58.022826] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:54.960 [2024-10-11 23:02:58.023064] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:54.960 [2024-10-11 23:02:58.023326] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:54.960 [2024-10-11 23:02:58.023892] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:54.960 [2024-10-11 23:02:58.024123] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:54.960 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:54.960 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:39:54.960 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:54.960 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:54.960 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:54.961 [2024-10-11 23:02:58.076247] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:54.961 Malloc0 00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:54.961 [2024-10-11 23:02:58.144517] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
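The target setup driven by target/bdevio.sh in the trace above (lines @18–@22 of that script) can be sketched as a plain RPC sequence. This is a non-runnable setup sketch: it assumes an `nvmf_tgt` is already listening on `/var/tmp/spdk.sock` and that `rpc.py` is at the path shown (the `RPC` variable is our own); the transport options, bdev size, NQN, serial number, and listen address are taken directly from the log.

```shell
#!/usr/bin/env bash
# Sketch of the bdevio.sh target setup traced above (rpc.py path is assumed).
RPC=./scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8192-byte in-capsule data
$RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB malloc bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

Each call corresponds to one `rpc_cmd` invocation in the trace, ending with the `*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***` notice.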
00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:39:54.961 { 00:39:54.961 "params": { 00:39:54.961 "name": "Nvme$subsystem", 00:39:54.961 "trtype": "$TEST_TRANSPORT", 00:39:54.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:54.961 "adrfam": "ipv4", 00:39:54.961 "trsvcid": "$NVMF_PORT", 00:39:54.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:54.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:54.961 "hdgst": ${hdgst:-false}, 00:39:54.961 "ddgst": ${ddgst:-false} 00:39:54.961 }, 00:39:54.961 "method": "bdev_nvme_attach_controller" 00:39:54.961 } 00:39:54.961 EOF 00:39:54.961 )") 00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 
00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:39:54.961 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:39:54.961 "params": { 00:39:54.961 "name": "Nvme1", 00:39:54.961 "trtype": "tcp", 00:39:54.961 "traddr": "10.0.0.2", 00:39:54.961 "adrfam": "ipv4", 00:39:54.961 "trsvcid": "4420", 00:39:54.961 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:54.961 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:54.961 "hdgst": false, 00:39:54.961 "ddgst": false 00:39:54.961 }, 00:39:54.961 "method": "bdev_nvme_attach_controller" 00:39:54.961 }' 00:39:54.961 [2024-10-11 23:02:58.191512] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:39:54.961 [2024-10-11 23:02:58.191614] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid449066 ] 00:39:55.219 [2024-10-11 23:02:58.253707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:55.219 [2024-10-11 23:02:58.303599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:55.219 [2024-10-11 23:02:58.303649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:55.219 [2024-10-11 23:02:58.303653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:55.478 I/O targets: 00:39:55.478 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:39:55.478 00:39:55.478 00:39:55.478 CUnit - A unit testing framework for C - Version 2.1-3 00:39:55.478 http://cunit.sourceforge.net/ 00:39:55.478 00:39:55.478 00:39:55.478 Suite: bdevio tests on: Nvme1n1 00:39:55.478 Test: blockdev write read block ...passed 00:39:55.478 Test: blockdev write zeroes read block ...passed 00:39:55.478 Test: blockdev write zeroes read no split ...passed 00:39:55.478 Test: blockdev 
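The JSON that `gen_nvmf_target_json` pipes into bdevio via `/dev/fd/62` is printed verbatim in the trace above. A minimal shell sketch reconstructing that per-controller entry (the `gen_attach_entry` function name is our own; the field values are the ones the log shows after `jq` substitution):

```shell
#!/usr/bin/env bash
# Rebuild the bdev_nvme_attach_controller config entry shown in the trace.
# Function name and argument handling are our own; values come from the log.
gen_attach_entry() {
    local s=${1:-1}
    printf '{ "params": { "name": "Nvme%s", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" }\n' "$s" "$s" "$s"
}

gen_attach_entry 1
```

In the actual test the template fields (`$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, `$NVMF_PORT`) are filled from the environment and the result is joined with `IFS=,` and filtered through `jq .` before being handed to bdevio.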
write zeroes read split ...passed 00:39:55.478 Test: blockdev write zeroes read split partial ...passed 00:39:55.478 Test: blockdev reset ...[2024-10-11 23:02:58.658293] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:55.478 [2024-10-11 23:02:58.658421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bdb80 (9): Bad file descriptor 00:39:55.478 [2024-10-11 23:02:58.669688] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:39:55.478 passed 00:39:55.478 Test: blockdev write read 8 blocks ...passed 00:39:55.478 Test: blockdev write read size > 128k ...passed 00:39:55.478 Test: blockdev write read invalid size ...passed 00:39:55.737 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:55.737 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:55.737 Test: blockdev write read max offset ...passed 00:39:55.737 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:55.737 Test: blockdev writev readv 8 blocks ...passed 00:39:55.737 Test: blockdev writev readv 30 x 1block ...passed 00:39:55.737 Test: blockdev writev readv block ...passed 00:39:55.737 Test: blockdev writev readv size > 128k ...passed 00:39:55.737 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:55.737 Test: blockdev comparev and writev ...[2024-10-11 23:02:58.966033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:55.737 [2024-10-11 23:02:58.966070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:55.737 [2024-10-11 23:02:58.966096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:55.737 [2024-10-11 23:02:58.966113] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:55.737 [2024-10-11 23:02:58.966508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:55.737 [2024-10-11 23:02:58.966533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:39:55.737 [2024-10-11 23:02:58.966564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:55.737 [2024-10-11 23:02:58.966584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:39:55.737 [2024-10-11 23:02:58.966982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:55.737 [2024-10-11 23:02:58.967006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:39:55.737 [2024-10-11 23:02:58.967028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:55.737 [2024-10-11 23:02:58.967044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:39:55.737 [2024-10-11 23:02:58.967432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:55.737 [2024-10-11 23:02:58.967457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:39:55.737 [2024-10-11 23:02:58.967479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x200 00:39:55.737 [2024-10-11 23:02:58.967495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:39:55.997 passed 00:39:55.997 Test: blockdev nvme passthru rw ...passed 00:39:55.997 Test: blockdev nvme passthru vendor specific ...[2024-10-11 23:02:59.051848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:55.997 [2024-10-11 23:02:59.051877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:39:55.997 [2024-10-11 23:02:59.052049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:55.997 [2024-10-11 23:02:59.052073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:39:55.997 [2024-10-11 23:02:59.052228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:55.997 [2024-10-11 23:02:59.052252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:39:55.997 [2024-10-11 23:02:59.052409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:55.997 [2024-10-11 23:02:59.052433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:39:55.997 passed 00:39:55.997 Test: blockdev nvme admin passthru ...passed 00:39:55.997 Test: blockdev copy ...passed 00:39:55.997 00:39:55.997 Run Summary: Type Total Ran Passed Failed Inactive 00:39:55.997 suites 1 1 n/a 0 0 00:39:55.997 tests 23 23 23 0 0 00:39:55.997 asserts 152 152 152 0 n/a 00:39:55.997 00:39:55.997 Elapsed time = 1.117 seconds 00:39:56.257 23:02:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:56.257 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:56.257 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:56.257 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:56.257 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:39:56.257 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:39:56.257 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:56.257 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:39:56.257 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:56.257 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:39:56.257 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:56.257 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:56.257 rmmod nvme_tcp 00:39:56.257 rmmod nvme_fabrics 00:39:56.257 rmmod nvme_keyring 00:39:56.257 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:56.257 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:39:56.257 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:39:56.257 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # 
'[' -n 449038 ']' 00:39:56.257 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 449038 00:39:56.257 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 449038 ']' 00:39:56.257 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 449038 00:39:56.257 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:39:56.257 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:56.257 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 449038 00:39:56.257 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:39:56.257 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:39:56.257 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 449038' 00:39:56.257 killing process with pid 449038 00:39:56.257 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 449038 00:39:56.257 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 449038 00:39:56.516 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:56.516 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:56.516 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:56.516 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:39:56.516 
23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:39:56.516 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:56.516 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:39:56.516 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:56.516 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:56.516 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:56.516 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:56.516 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:58.423 23:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:58.423 00:39:58.423 real 0m6.221s 00:39:58.423 user 0m8.268s 00:39:58.423 sys 0m2.427s 00:39:58.423 23:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:58.423 23:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:58.423 ************************************ 00:39:58.423 END TEST nvmf_bdevio 00:39:58.423 ************************************ 00:39:58.423 23:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:39:58.423 00:39:58.423 real 3m53.633s 00:39:58.423 user 8m49.670s 00:39:58.423 sys 1m24.441s 00:39:58.423 23:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:58.423 23:03:01 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:58.423 ************************************ 00:39:58.423 END TEST nvmf_target_core_interrupt_mode 00:39:58.423 ************************************ 00:39:58.682 23:03:01 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:58.682 23:03:01 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:58.682 23:03:01 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:58.682 23:03:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:58.682 ************************************ 00:39:58.682 START TEST nvmf_interrupt 00:39:58.682 ************************************ 00:39:58.682 23:03:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:58.682 * Looking for test storage... 
00:39:58.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:58.682 23:03:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:58.682 23:03:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:39:58.682 23:03:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:58.682 23:03:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:58.682 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:58.682 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:58.682 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:58.682 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:39:58.682 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:39:58.682 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:39:58.682 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:39:58.682 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:39:58.682 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:39:58.682 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:39:58.682 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:58.682 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:39:58.682 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:39:58.682 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:58.682 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:58.682 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:39:58.682 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:39:58.682 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:58.682 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:39:58.682 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:39:58.682 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:39:58.682 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:58.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:58.683 --rc genhtml_branch_coverage=1 00:39:58.683 --rc genhtml_function_coverage=1 00:39:58.683 --rc genhtml_legend=1 00:39:58.683 --rc geninfo_all_blocks=1 00:39:58.683 --rc geninfo_unexecuted_blocks=1 00:39:58.683 00:39:58.683 ' 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:58.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:58.683 --rc genhtml_branch_coverage=1 00:39:58.683 --rc 
genhtml_function_coverage=1 00:39:58.683 --rc genhtml_legend=1 00:39:58.683 --rc geninfo_all_blocks=1 00:39:58.683 --rc geninfo_unexecuted_blocks=1 00:39:58.683 00:39:58.683 ' 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:58.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:58.683 --rc genhtml_branch_coverage=1 00:39:58.683 --rc genhtml_function_coverage=1 00:39:58.683 --rc genhtml_legend=1 00:39:58.683 --rc geninfo_all_blocks=1 00:39:58.683 --rc geninfo_unexecuted_blocks=1 00:39:58.683 00:39:58.683 ' 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:58.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:58.683 --rc genhtml_branch_coverage=1 00:39:58.683 --rc genhtml_function_coverage=1 00:39:58.683 --rc genhtml_legend=1 00:39:58.683 --rc geninfo_all_blocks=1 00:39:58.683 --rc geninfo_unexecuted_blocks=1 00:39:58.683 00:39:58.683 ' 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:58.683 
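The lcov version gate traced above (`lt 1.15 2` calling `cmp_versions` in scripts/common.sh) splits each version string on `.`, `-`, or `:` via `IFS=.-:` and compares the fields numerically, left to right. A minimal re-implementation under those assumptions (treating missing trailing fields as 0 is our assumption; the real helper routes through a `decimal` validator):

```shell
#!/usr/bin/env bash
# Sketch of scripts/common.sh's cmp_versions: field-wise numeric comparison.
cmp_versions() {
    local IFS=.-: op=$2
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}   # pad short versions with 0
        if (( a > b )); then [[ $op == ">" || $op == ">=" ]]; return; fi
        if (( a < b )); then [[ $op == "<" || $op == "<=" ]]; return; fi
    done
    [[ $op == "==" || $op == ">=" || $op == "<=" ]]   # all fields equal
}

cmp_versions 1.15 "<" 2 && echo "lcov 1.15 is older than 2"
```

This is what decides which `LCOV_OPTS` branch the trace takes: since 1.15 < 2, the pre-2.0 lcov flags are exported.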
23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:58.683 
23:03:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:58.683 23:03:01 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:58.683 
23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:39:58.683 23:03:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:01.218 23:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:01.218 23:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:40:01.218 23:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:01.218 23:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:01.218 23:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:01.218 23:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:01.218 23:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:01.218 23:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:40:01.218 23:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:01.218 23:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:40:01.218 23:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:40:01.218 23:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:40:01.218 23:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:40:01.218 23:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:40:01.218 23:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:40:01.218 23:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:01.218 23:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:01.218 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:01.218 23:03:04 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:01.218 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:01.218 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:01.218 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:01.218 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:01.218 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:01.218 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:01.218 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:01.218 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:01.218 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:01.218 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:01.218 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:01.218 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:01.218 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:01.218 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:01.218 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:01.218 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:01.218 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:01.219 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:01.219 23:03:04 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:01.219 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:01.219 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:01.219 23:03:04 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:01.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:01.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:40:01.219 00:40:01.219 --- 10.0.0.2 ping statistics --- 00:40:01.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:01.219 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:01.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:01.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:40:01.219 00:40:01.219 --- 10.0.0.1 ping statistics --- 00:40:01.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:01.219 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:01.219 23:03:04 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=451266 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 451266 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 451266 ']' 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:01.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:01.219 [2024-10-11 23:03:04.218241] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:01.219 [2024-10-11 23:03:04.219375] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:40:01.219 [2024-10-11 23:03:04.219441] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:01.219 [2024-10-11 23:03:04.284749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:01.219 [2024-10-11 23:03:04.330581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:01.219 [2024-10-11 23:03:04.330646] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:01.219 [2024-10-11 23:03:04.330662] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:01.219 [2024-10-11 23:03:04.330677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:01.219 [2024-10-11 23:03:04.330688] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:01.219 [2024-10-11 23:03:04.332048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:01.219 [2024-10-11 23:03:04.332054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:01.219 [2024-10-11 23:03:04.420624] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:01.219 [2024-10-11 23:03:04.420659] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:01.219 [2024-10-11 23:03:04.420940] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:40:01.219 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:40:01.478 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:40:01.478 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:40:01.478 5000+0 records in 00:40:01.478 5000+0 records out 00:40:01.478 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0146574 s, 699 MB/s 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:01.479 AIO0 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:01.479 23:03:04 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:01.479 [2024-10-11 23:03:04.536708] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:01.479 [2024-10-11 23:03:04.560915] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 451266 0 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 
-- # reactor_is_busy_or_idle 451266 0 idle 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=451266 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 451266 -w 256 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 451266 root 20 0 128.2g 46464 33792 S 0.0 0.1 0:00.25 reactor_0' 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 451266 root 20 0 128.2g 46464 33792 S 0.0 0.1 0:00.25 reactor_0 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 451266 1 00:40:01.479 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 451266 1 idle 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=451266 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 451266 -w 256 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 451271 root 20 0 128.2g 46464 33792 S 0.0 0.1 0:00.00 reactor_1' 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 451271 root 20 0 128.2g 46464 33792 S 0.0 0.1 0:00.00 
reactor_1 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=451429 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 451266 0 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 451266 0 busy 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=451266 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 
00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 451266 -w 256 00:40:01.738 23:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 451266 root 20 0 128.2g 47232 33792 R 99.9 0.1 0:00.47 reactor_0' 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 451266 root 20 0 128.2g 47232 33792 R 99.9 0.1 0:00.47 reactor_0 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- 
# BUSY_THRESHOLD=30 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 451266 1 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 451266 1 busy 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=451266 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 451266 -w 256 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 451271 root 20 0 128.2g 47232 33792 R 99.9 0.1 0:00.27 reactor_1' 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 451271 root 20 0 128.2g 47232 33792 R 99.9 0.1 0:00.27 reactor_1 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:01.996 23:03:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 451429 00:40:11.971 Initializing NVMe Controllers 00:40:11.971 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:11.971 Controller IO queue size 256, less than required. 00:40:11.971 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:11.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:11.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:11.971 Initialization complete. Launching workers. 
00:40:11.971 ======================================================== 00:40:11.971 Latency(us) 00:40:11.971 Device Information : IOPS MiB/s Average min max 00:40:11.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13295.00 51.93 19270.04 3858.80 23668.07 00:40:11.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13311.60 52.00 19245.03 4080.14 60944.41 00:40:11.971 ======================================================== 00:40:11.971 Total : 26606.60 103.93 19257.53 3858.80 60944.41 00:40:11.971 00:40:11.971 23:03:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:11.971 23:03:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 451266 0 00:40:11.971 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 451266 0 idle 00:40:11.971 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=451266 00:40:11.971 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:11.971 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:11.971 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:11.971 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:11.971 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:11.971 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:11.971 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:11.971 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:11.971 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:11.971 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 451266 -w 256 00:40:11.971 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep 
reactor_0 00:40:11.971 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 451266 root 20 0 128.2g 47232 33792 S 0.0 0.1 0:20.19 reactor_0' 00:40:11.971 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 451266 root 20 0 128.2g 47232 33792 S 0.0 0.1 0:20.19 reactor_0 00:40:11.971 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:11.971 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:11.971 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:11.971 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:11.971 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:11.971 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:11.971 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:11.971 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:11.971 23:03:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:11.971 23:03:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 451266 1 00:40:11.971 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 451266 1 idle 00:40:12.231 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=451266 00:40:12.231 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:12.231 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:12.231 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:12.231 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:12.231 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:12.231 23:03:15 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:12.231 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:12.231 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:12.231 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:12.231 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 451266 -w 256 00:40:12.231 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:12.231 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 451271 root 20 0 128.2g 47232 33792 S 0.0 0.1 0:09.98 reactor_1' 00:40:12.231 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 451271 root 20 0 128.2g 47232 33792 S 0.0 0.1 0:09.98 reactor_1 00:40:12.231 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:12.231 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:12.231 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:12.231 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:12.231 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:12.231 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:12.231 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:12.231 23:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:12.231 23:03:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:12.491 23:03:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:40:12.491 23:03:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:40:12.491 23:03:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:40:12.491 23:03:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:40:12.491 23:03:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 451266 0 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 451266 0 idle 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=451266 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 451266 -w 256 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 451266 root 20 0 128.2g 59520 33792 S 0.0 0.1 0:20.29 reactor_0' 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 451266 root 20 0 128.2g 59520 33792 S 0.0 0.1 0:20.29 reactor_0 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:15.030 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:15.031 23:03:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:40:15.031 23:03:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 451266 1 00:40:15.031 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 451266 1 idle 00:40:15.031 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=451266 00:40:15.031 
23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:15.031 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:15.031 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:15.031 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:15.031 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:15.031 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:15.031 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:15.031 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:15.031 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:15.031 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 451266 -w 256 00:40:15.031 23:03:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 451271 root 20 0 128.2g 59520 33792 S 0.0 0.1 0:10.01 reactor_1' 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 451271 root 20 0 128.2g 59520 33792 S 0.0 0.1 0:10.01 reactor_1 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:15.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:15.031 rmmod nvme_tcp 00:40:15.031 rmmod nvme_fabrics 00:40:15.031 rmmod nvme_keyring 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:15.031 23:03:18 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 451266 ']' 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 451266 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 451266 ']' 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 451266 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:15.031 23:03:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 451266 00:40:15.289 23:03:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:15.289 23:03:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:15.289 23:03:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 451266' 00:40:15.289 killing process with pid 451266 00:40:15.289 23:03:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 451266 00:40:15.289 23:03:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 451266 00:40:15.289 23:03:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:40:15.289 23:03:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:15.289 23:03:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:15.289 23:03:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:40:15.289 23:03:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:40:15.289 23:03:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:40:15.289 23:03:18 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@789 -- # iptables-restore 00:40:15.289 23:03:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:15.289 23:03:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:15.289 23:03:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:15.289 23:03:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:15.289 23:03:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:17.819 23:03:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:17.819 00:40:17.819 real 0m18.818s 00:40:17.819 user 0m36.272s 00:40:17.819 sys 0m7.152s 00:40:17.819 23:03:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:17.819 23:03:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:17.819 ************************************ 00:40:17.819 END TEST nvmf_interrupt 00:40:17.819 ************************************ 00:40:17.819 00:40:17.819 real 32m48.268s 00:40:17.819 user 87m13.483s 00:40:17.819 sys 7m56.707s 00:40:17.819 23:03:20 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:17.819 23:03:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:17.819 ************************************ 00:40:17.819 END TEST nvmf_tcp 00:40:17.819 ************************************ 00:40:17.819 23:03:20 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:40:17.819 23:03:20 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:17.819 23:03:20 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:40:17.819 23:03:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:17.819 23:03:20 -- common/autotest_common.sh@10 -- # set +x 00:40:17.819 ************************************ 
00:40:17.819 START TEST spdkcli_nvmf_tcp 00:40:17.819 ************************************ 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:17.819 * Looking for test storage... 00:40:17.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:17.819 23:03:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:17.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.819 --rc genhtml_branch_coverage=1 00:40:17.819 --rc genhtml_function_coverage=1 00:40:17.819 --rc genhtml_legend=1 00:40:17.819 --rc geninfo_all_blocks=1 00:40:17.819 --rc geninfo_unexecuted_blocks=1 00:40:17.819 00:40:17.819 ' 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:17.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.820 --rc genhtml_branch_coverage=1 00:40:17.820 --rc genhtml_function_coverage=1 00:40:17.820 --rc genhtml_legend=1 00:40:17.820 --rc geninfo_all_blocks=1 
00:40:17.820 --rc geninfo_unexecuted_blocks=1 00:40:17.820 00:40:17.820 ' 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:17.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.820 --rc genhtml_branch_coverage=1 00:40:17.820 --rc genhtml_function_coverage=1 00:40:17.820 --rc genhtml_legend=1 00:40:17.820 --rc geninfo_all_blocks=1 00:40:17.820 --rc geninfo_unexecuted_blocks=1 00:40:17.820 00:40:17.820 ' 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:17.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.820 --rc genhtml_branch_coverage=1 00:40:17.820 --rc genhtml_function_coverage=1 00:40:17.820 --rc genhtml_legend=1 00:40:17.820 --rc geninfo_all_blocks=1 00:40:17.820 --rc geninfo_unexecuted_blocks=1 00:40:17.820 00:40:17.820 ' 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:17.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=453928 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 453928 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 453928 ']' 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:17.820 23:03:20 
spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:17.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:17.820 23:03:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:17.820 [2024-10-11 23:03:20.822812] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:40:17.820 [2024-10-11 23:03:20.822904] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid453928 ] 00:40:17.820 [2024-10-11 23:03:20.880493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:17.820 [2024-10-11 23:03:20.928663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:17.820 [2024-10-11 23:03:20.928667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:17.820 23:03:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:17.820 23:03:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:40:17.820 23:03:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:40:17.820 23:03:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:17.820 23:03:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:17.820 23:03:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:40:17.820 23:03:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:40:17.820 23:03:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:40:17.820 
23:03:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:17.820 23:03:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:17.820 23:03:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:40:17.820 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:40:17.820 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:40:17.820 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:40:17.820 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:40:17.820 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:40:17.820 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:40:17.820 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:17.820 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:40:17.820 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:40:17.820 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:17.820 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:17.820 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:40:17.820 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:17.820 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:17.820 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:40:17.820 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:17.820 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:17.820 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:17.820 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:17.820 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:40:17.820 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:40:17.820 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:17.820 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:40:17.820 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:17.820 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:40:17.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:40:17.821 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:40:17.821 ' 00:40:21.115 [2024-10-11 23:03:23.760347] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:22.054 [2024-10-11 23:03:25.028693] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:40:24.592 [2024-10-11 23:03:27.371909] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4261 *** 00:40:26.498 [2024-10-11 23:03:29.378085] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:40:27.878 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:40:27.878 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:40:27.878 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:40:27.878 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:40:27.878 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:40:27.878 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:40:27.878 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:40:27.878 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:27.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:40:27.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:40:27.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:27.878 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:27.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:40:27.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:27.878 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:40:27.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:40:27.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:27.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:27.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:27.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:27.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:40:27.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:40:27.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:27.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:40:27.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:27.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:40:27.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:40:27.878 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:40:27.878 23:03:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:40:27.878 23:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:27.878 
23:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:27.878 23:03:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:40:27.878 23:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:27.878 23:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:27.878 23:03:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:40:27.878 23:03:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:40:28.445 23:03:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:40:28.445 23:03:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:40:28.445 23:03:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:40:28.445 23:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:28.445 23:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:28.445 23:03:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:40:28.445 23:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:28.445 23:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:28.445 23:03:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:40:28.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:40:28.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:28.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:40:28.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:40:28.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:40:28.445 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:40:28.445 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:28.445 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:40:28.445 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:40:28.445 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:40:28.445 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:40:28.445 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:40:28.445 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:40:28.445 ' 00:40:33.723 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:40:33.723 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:40:33.723 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:33.723 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:40:33.723 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:40:33.723 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:40:33.723 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:40:33.723 Executing command: ['/nvmf/subsystem 
delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:33.723 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:40:33.723 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:40:33.723 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:40:33.723 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:40:33.723 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:40:33.723 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:40:33.723 23:03:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:40:33.723 23:03:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:33.723 23:03:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:33.982 23:03:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 453928 00:40:33.982 23:03:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 453928 ']' 00:40:33.982 23:03:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 453928 00:40:33.982 23:03:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:40:33.982 23:03:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:33.982 23:03:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 453928 00:40:33.982 23:03:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:33.982 23:03:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:33.982 23:03:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 453928' 00:40:33.982 killing process with pid 453928 00:40:33.982 23:03:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 453928 00:40:33.982 23:03:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 453928 00:40:33.982 23:03:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # 
cleanup 00:40:33.982 23:03:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:40:33.982 23:03:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 453928 ']' 00:40:33.982 23:03:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 453928 00:40:33.982 23:03:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 453928 ']' 00:40:33.982 23:03:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 453928 00:40:33.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (453928) - No such process 00:40:33.982 23:03:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 453928 is not found' 00:40:33.982 Process with pid 453928 is not found 00:40:33.982 23:03:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:40:33.982 23:03:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:40:33.982 23:03:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:40:33.982 00:40:33.982 real 0m16.626s 00:40:33.982 user 0m35.498s 00:40:33.982 sys 0m0.821s 00:40:33.982 23:03:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:33.982 23:03:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:33.982 ************************************ 00:40:33.982 END TEST spdkcli_nvmf_tcp 00:40:33.982 ************************************ 00:40:34.242 23:03:37 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:34.242 23:03:37 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:40:34.242 23:03:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:34.242 23:03:37 -- common/autotest_common.sh@10 
-- # set +x 00:40:34.242 ************************************ 00:40:34.242 START TEST nvmf_identify_passthru 00:40:34.242 ************************************ 00:40:34.242 23:03:37 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:34.242 * Looking for test storage... 00:40:34.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:34.242 23:03:37 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:34.242 23:03:37 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:40:34.242 23:03:37 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:34.242 23:03:37 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:34.242 23:03:37 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:34.242 23:03:37 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:34.242 23:03:37 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:34.242 23:03:37 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:40:34.242 23:03:37 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:40:34.242 23:03:37 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:40:34.242 23:03:37 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:40:34.242 23:03:37 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:40:34.242 23:03:37 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:40:34.242 23:03:37 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:40:34.242 23:03:37 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:34.242 23:03:37 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:40:34.242 23:03:37 
nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:40:34.242 23:03:37 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:34.242 23:03:37 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:34.242 23:03:37 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:40:34.242 23:03:37 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:40:34.242 23:03:37 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:34.242 23:03:37 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:40:34.242 23:03:37 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:40:34.242 23:03:37 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:40:34.242 23:03:37 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:40:34.242 23:03:37 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:34.242 23:03:37 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:40:34.242 23:03:37 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:40:34.242 23:03:37 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:34.242 23:03:37 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:34.242 23:03:37 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:40:34.242 23:03:37 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:34.242 23:03:37 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:34.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.242 --rc genhtml_branch_coverage=1 00:40:34.242 --rc genhtml_function_coverage=1 00:40:34.242 --rc genhtml_legend=1 00:40:34.242 --rc geninfo_all_blocks=1 00:40:34.242 --rc geninfo_unexecuted_blocks=1 00:40:34.242 00:40:34.242 ' 00:40:34.242 
23:03:37 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:34.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.242 --rc genhtml_branch_coverage=1 00:40:34.242 --rc genhtml_function_coverage=1 00:40:34.242 --rc genhtml_legend=1 00:40:34.242 --rc geninfo_all_blocks=1 00:40:34.242 --rc geninfo_unexecuted_blocks=1 00:40:34.242 00:40:34.242 ' 00:40:34.242 23:03:37 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:34.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.242 --rc genhtml_branch_coverage=1 00:40:34.242 --rc genhtml_function_coverage=1 00:40:34.242 --rc genhtml_legend=1 00:40:34.242 --rc geninfo_all_blocks=1 00:40:34.242 --rc geninfo_unexecuted_blocks=1 00:40:34.242 00:40:34.242 ' 00:40:34.242 23:03:37 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:34.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.242 --rc genhtml_branch_coverage=1 00:40:34.242 --rc genhtml_function_coverage=1 00:40:34.242 --rc genhtml_legend=1 00:40:34.242 --rc geninfo_all_blocks=1 00:40:34.242 --rc geninfo_unexecuted_blocks=1 00:40:34.242 00:40:34.242 ' 00:40:34.242 23:03:37 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:34.242 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:40:34.242 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:34.242 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:34.242 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:34.242 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:34.242 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:34.242 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:40:34.242 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:34.242 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:34.242 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:34.242 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:34.242 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:34.242 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:34.242 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:34.242 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:34.242 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:34.242 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:34.242 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:34.243 23:03:37 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:34.243 23:03:37 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:34.243 23:03:37 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:34.243 23:03:37 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:34.243 23:03:37 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.243 23:03:37 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.243 23:03:37 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.243 23:03:37 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:34.243 23:03:37 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.243 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:40:34.243 23:03:37 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:34.243 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:34.243 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:34.243 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:34.243 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:34.243 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:34.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:34.243 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:34.243 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:34.243 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:34.243 23:03:37 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:34.243 23:03:37 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:34.243 23:03:37 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:34.243 23:03:37 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:34.243 23:03:37 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:34.243 23:03:37 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.243 23:03:37 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.243 23:03:37 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.243 23:03:37 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:34.243 23:03:37 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.243 23:03:37 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:40:34.243 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:34.243 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:34.243 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:34.243 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:34.243 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:34.243 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:34.243 23:03:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:34.243 23:03:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:34.243 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:34.243 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:34.243 23:03:37 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:40:34.243 23:03:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:36.782 
23:03:39 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:36.782 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:36.782 Found 0000:0a:00.1 
(0x8086 - 0x159b) 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:36.782 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:36.782 23:03:39 
nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:36.782 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:36.782 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:36.783 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:36.783 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:36.783 
23:03:39 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:36.783 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:36.783 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:36.783 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:36.783 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:36.783 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:36.783 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:36.783 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:36.783 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:36.783 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:36.783 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:36.783 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:36.783 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:36.783 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:36.783 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:36.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:36.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:40:36.783 00:40:36.783 --- 10.0.0.2 ping statistics --- 00:40:36.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:36.783 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:40:36.783 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:36.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:36.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:40:36.783 00:40:36.783 --- 10.0.0.1 ping statistics --- 00:40:36.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:36.783 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:40:36.783 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:36.783 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:40:36.783 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:40:36.783 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:36.783 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:36.783 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:36.783 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:36.783 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:36.783 23:03:39 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:36.783 23:03:39 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:40:36.783 23:03:39 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:36.783 23:03:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:36.783 23:03:39 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:40:36.783 
23:03:39 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:40:36.783 23:03:39 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:40:36.783 23:03:39 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:40:36.783 23:03:39 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:40:36.783 23:03:39 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:40:36.783 23:03:39 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:40:36.783 23:03:39 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:36.783 23:03:39 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:40:36.783 23:03:39 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:40:36.783 23:03:39 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:40:36.783 23:03:39 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:40:36.783 23:03:39 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:88:00.0 00:40:36.783 23:03:39 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:40:36.783 23:03:39 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:40:36.783 23:03:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:36.783 23:03:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:40:36.783 23:03:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:40:40.980 23:03:43 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:40:40.980 23:03:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:40.980 23:03:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:40:40.980 23:03:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:40:45.172 23:03:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:40:45.172 23:03:48 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:40:45.172 23:03:48 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:45.172 23:03:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:45.172 23:03:48 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:40:45.172 23:03:48 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:45.172 23:03:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:45.172 23:03:48 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=458549 00:40:45.172 23:03:48 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:40:45.172 23:03:48 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:45.172 23:03:48 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 458549 00:40:45.172 23:03:48 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 458549 ']' 00:40:45.172 23:03:48 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
00:40:45.172 23:03:48 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:45.172 23:03:48 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:45.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:45.172 23:03:48 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:45.172 23:03:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:45.172 [2024-10-11 23:03:48.247260] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:40:45.172 [2024-10-11 23:03:48.247355] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:45.172 [2024-10-11 23:03:48.313679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:45.172 [2024-10-11 23:03:48.364487] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:45.172 [2024-10-11 23:03:48.364540] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:45.172 [2024-10-11 23:03:48.364576] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:45.172 [2024-10-11 23:03:48.364589] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:45.172 [2024-10-11 23:03:48.364598] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:40:45.172 [2024-10-11 23:03:48.366207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:45.172 [2024-10-11 23:03:48.366272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:45.172 [2024-10-11 23:03:48.366342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:45.172 [2024-10-11 23:03:48.366340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:45.433 23:03:48 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:45.433 23:03:48 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:40:45.433 23:03:48 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:40:45.433 23:03:48 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.433 23:03:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:45.433 INFO: Log level set to 20 00:40:45.433 INFO: Requests: 00:40:45.433 { 00:40:45.433 "jsonrpc": "2.0", 00:40:45.433 "method": "nvmf_set_config", 00:40:45.433 "id": 1, 00:40:45.433 "params": { 00:40:45.433 "admin_cmd_passthru": { 00:40:45.433 "identify_ctrlr": true 00:40:45.433 } 00:40:45.433 } 00:40:45.433 } 00:40:45.433 00:40:45.433 INFO: response: 00:40:45.433 { 00:40:45.433 "jsonrpc": "2.0", 00:40:45.433 "id": 1, 00:40:45.433 "result": true 00:40:45.433 } 00:40:45.433 00:40:45.433 23:03:48 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.433 23:03:48 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:40:45.433 23:03:48 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.433 23:03:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:45.433 INFO: Setting log level to 20 00:40:45.433 INFO: Setting log level to 20 00:40:45.433 INFO: Log level set to 20 00:40:45.433 INFO: Log level set to 20 00:40:45.433 
INFO: Requests: 00:40:45.433 { 00:40:45.433 "jsonrpc": "2.0", 00:40:45.433 "method": "framework_start_init", 00:40:45.433 "id": 1 00:40:45.433 } 00:40:45.433 00:40:45.433 INFO: Requests: 00:40:45.433 { 00:40:45.433 "jsonrpc": "2.0", 00:40:45.434 "method": "framework_start_init", 00:40:45.434 "id": 1 00:40:45.434 } 00:40:45.434 00:40:45.434 [2024-10-11 23:03:48.583503] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:40:45.434 INFO: response: 00:40:45.434 { 00:40:45.434 "jsonrpc": "2.0", 00:40:45.434 "id": 1, 00:40:45.434 "result": true 00:40:45.434 } 00:40:45.434 00:40:45.434 INFO: response: 00:40:45.434 { 00:40:45.434 "jsonrpc": "2.0", 00:40:45.434 "id": 1, 00:40:45.434 "result": true 00:40:45.434 } 00:40:45.434 00:40:45.434 23:03:48 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.434 23:03:48 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:45.434 23:03:48 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.434 23:03:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:45.434 INFO: Setting log level to 40 00:40:45.434 INFO: Setting log level to 40 00:40:45.434 INFO: Setting log level to 40 00:40:45.434 [2024-10-11 23:03:48.593701] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:45.434 23:03:48 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.434 23:03:48 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:40:45.434 23:03:48 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:45.434 23:03:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:45.434 23:03:48 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:40:45.434 23:03:48 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.434 23:03:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:48.719 Nvme0n1 00:40:48.719 23:03:51 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.719 23:03:51 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:40:48.719 23:03:51 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.719 23:03:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:48.719 23:03:51 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.719 23:03:51 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:40:48.719 23:03:51 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.719 23:03:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:48.719 23:03:51 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.719 23:03:51 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:48.719 23:03:51 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.719 23:03:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:48.719 [2024-10-11 23:03:51.498438] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:48.719 23:03:51 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.719 23:03:51 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:40:48.719 23:03:51 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.719 23:03:51 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:48.719 [ 00:40:48.719 { 00:40:48.719 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:48.719 "subtype": "Discovery", 00:40:48.719 "listen_addresses": [], 00:40:48.719 "allow_any_host": true, 00:40:48.719 "hosts": [] 00:40:48.719 }, 00:40:48.719 { 00:40:48.719 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:48.719 "subtype": "NVMe", 00:40:48.719 "listen_addresses": [ 00:40:48.719 { 00:40:48.719 "trtype": "TCP", 00:40:48.719 "adrfam": "IPv4", 00:40:48.719 "traddr": "10.0.0.2", 00:40:48.719 "trsvcid": "4420" 00:40:48.719 } 00:40:48.719 ], 00:40:48.719 "allow_any_host": true, 00:40:48.719 "hosts": [], 00:40:48.719 "serial_number": "SPDK00000000000001", 00:40:48.719 "model_number": "SPDK bdev Controller", 00:40:48.719 "max_namespaces": 1, 00:40:48.719 "min_cntlid": 1, 00:40:48.719 "max_cntlid": 65519, 00:40:48.719 "namespaces": [ 00:40:48.719 { 00:40:48.719 "nsid": 1, 00:40:48.719 "bdev_name": "Nvme0n1", 00:40:48.719 "name": "Nvme0n1", 00:40:48.719 "nguid": "684131DD423846909379E875410E6283", 00:40:48.719 "uuid": "684131dd-4238-4690-9379-e875410e6283" 00:40:48.719 } 00:40:48.719 ] 00:40:48.719 } 00:40:48.719 ] 00:40:48.719 23:03:51 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.719 23:03:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:48.719 23:03:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:40:48.719 23:03:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:40:48.719 23:03:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:40:48.719 23:03:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:48.719 23:03:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:40:48.719 23:03:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:40:48.719 23:03:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:40:48.719 23:03:51 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:40:48.719 23:03:51 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:40:48.719 23:03:51 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:48.719 23:03:51 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.719 23:03:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:48.719 23:03:51 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.719 23:03:51 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:40:48.719 23:03:51 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:40:48.719 23:03:51 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:48.719 23:03:51 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:40:48.719 23:03:51 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:48.719 23:03:51 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:40:48.719 23:03:51 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:48.719 23:03:51 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:48.719 rmmod nvme_tcp 00:40:48.719 rmmod nvme_fabrics 00:40:48.719 rmmod nvme_keyring 00:40:48.719 23:03:51 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:48.719 23:03:51 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:40:48.719 23:03:51 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:40:48.719 23:03:51 nvmf_identify_passthru -- nvmf/common.sh@515 -- # '[' -n 458549 ']' 00:40:48.719 23:03:51 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 458549 00:40:48.719 23:03:51 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 458549 ']' 00:40:48.719 23:03:51 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 458549 00:40:48.719 23:03:51 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:40:48.719 23:03:51 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:48.719 23:03:51 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 458549 00:40:48.719 23:03:51 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:48.719 23:03:51 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:48.719 23:03:51 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 458549' 00:40:48.719 killing process with pid 458549 00:40:48.719 23:03:51 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 458549 00:40:48.719 23:03:51 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 458549 00:40:50.624 23:03:53 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:40:50.624 23:03:53 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:50.624 23:03:53 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:50.624 23:03:53 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:40:50.624 23:03:53 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:40:50.624 23:03:53 nvmf_identify_passthru -- nvmf/common.sh@789 -- 
# grep -v SPDK_NVMF 00:40:50.624 23:03:53 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:40:50.624 23:03:53 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:50.624 23:03:53 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:50.624 23:03:53 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:50.624 23:03:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:50.624 23:03:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:52.533 23:03:55 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:52.533 00:40:52.533 real 0m18.259s 00:40:52.533 user 0m27.235s 00:40:52.533 sys 0m2.334s 00:40:52.533 23:03:55 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:52.533 23:03:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:52.533 ************************************ 00:40:52.533 END TEST nvmf_identify_passthru 00:40:52.533 ************************************ 00:40:52.533 23:03:55 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:52.533 23:03:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:52.533 23:03:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:52.533 23:03:55 -- common/autotest_common.sh@10 -- # set +x 00:40:52.533 ************************************ 00:40:52.533 START TEST nvmf_dif 00:40:52.533 ************************************ 00:40:52.533 23:03:55 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:52.533 * Looking for test storage... 
00:40:52.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:52.533 23:03:55 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:52.533 23:03:55 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:40:52.533 23:03:55 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:52.533 23:03:55 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:40:52.533 23:03:55 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:52.533 23:03:55 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:52.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:52.533 --rc genhtml_branch_coverage=1 00:40:52.533 --rc genhtml_function_coverage=1 00:40:52.533 --rc genhtml_legend=1 00:40:52.533 --rc geninfo_all_blocks=1 00:40:52.533 --rc geninfo_unexecuted_blocks=1 00:40:52.533 00:40:52.533 ' 00:40:52.533 23:03:55 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:52.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:52.533 --rc genhtml_branch_coverage=1 00:40:52.533 --rc genhtml_function_coverage=1 00:40:52.533 --rc genhtml_legend=1 00:40:52.533 --rc geninfo_all_blocks=1 00:40:52.533 --rc geninfo_unexecuted_blocks=1 00:40:52.533 00:40:52.533 ' 00:40:52.533 23:03:55 nvmf_dif -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:40:52.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:52.533 --rc genhtml_branch_coverage=1 00:40:52.533 --rc genhtml_function_coverage=1 00:40:52.533 --rc genhtml_legend=1 00:40:52.533 --rc geninfo_all_blocks=1 00:40:52.533 --rc geninfo_unexecuted_blocks=1 00:40:52.533 00:40:52.533 ' 00:40:52.533 23:03:55 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:52.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:52.533 --rc genhtml_branch_coverage=1 00:40:52.533 --rc genhtml_function_coverage=1 00:40:52.533 --rc genhtml_legend=1 00:40:52.533 --rc geninfo_all_blocks=1 00:40:52.533 --rc geninfo_unexecuted_blocks=1 00:40:52.533 00:40:52.533 ' 00:40:52.533 23:03:55 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:52.533 23:03:55 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:40:52.533 23:03:55 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:52.533 23:03:55 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:52.533 23:03:55 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:52.533 23:03:55 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:52.533 23:03:55 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:52.533 23:03:55 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:52.533 23:03:55 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:52.533 23:03:55 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:52.533 23:03:55 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:52.533 23:03:55 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:52.533 23:03:55 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:52.533 23:03:55 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:52.533 23:03:55 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:52.533 23:03:55 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:52.533 23:03:55 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:52.533 23:03:55 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:52.533 23:03:55 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:52.533 23:03:55 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:52.533 23:03:55 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:52.533 23:03:55 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:52.533 23:03:55 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:52.533 23:03:55 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:40:52.533 23:03:55 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:52.533 23:03:55 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:40:52.533 23:03:55 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:52.533 23:03:55 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:52.533 23:03:55 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:52.533 23:03:55 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:52.533 23:03:55 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:52.533 23:03:55 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:52.533 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:52.533 23:03:55 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:52.533 23:03:55 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:52.533 23:03:55 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:52.533 23:03:55 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:40:52.533 23:03:55 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:40:52.533 23:03:55 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:40:52.533 23:03:55 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:40:52.533 23:03:55 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:40:52.533 23:03:55 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:52.533 23:03:55 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:52.534 23:03:55 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:52.534 23:03:55 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:52.534 23:03:55 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:52.534 23:03:55 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:52.534 23:03:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:52.534 23:03:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:52.534 23:03:55 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:52.534 23:03:55 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:52.534 23:03:55 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:40:52.534 23:03:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:40:55.064 23:03:57 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:55.064 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:55.064 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:55.064 23:03:57 nvmf_dif -- 
nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:55.064 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:55.064 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:55.064 
23:03:57 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:55.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:55.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:40:55.064 00:40:55.064 --- 10.0.0.2 ping statistics --- 00:40:55.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:55.064 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:55.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:55.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:40:55.064 00:40:55.064 --- 10.0.0.1 ping statistics --- 00:40:55.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:55.064 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:40:55.064 23:03:57 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:56.013 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:40:56.013 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:40:56.013 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:40:56.013 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:40:56.013 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:40:56.013 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:40:56.013 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:40:56.013 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:40:56.013 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:40:56.013 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:40:56.013 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:40:56.013 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:40:56.013 0000:80:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:40:56.013 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:40:56.013 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:40:56.013 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:40:56.013 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:40:56.271 23:03:59 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:56.271 23:03:59 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:56.271 23:03:59 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:56.271 23:03:59 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:56.271 23:03:59 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:56.271 23:03:59 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:56.271 23:03:59 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:40:56.271 23:03:59 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:40:56.271 23:03:59 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:40:56.271 23:03:59 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:56.271 23:03:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:56.271 23:03:59 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=461702 00:40:56.271 23:03:59 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:40:56.271 23:03:59 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 461702 00:40:56.271 23:03:59 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 461702 ']' 00:40:56.271 23:03:59 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:56.271 23:03:59 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:56.271 23:03:59 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:40:56.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:56.271 23:03:59 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:56.271 23:03:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:56.271 [2024-10-11 23:03:59.357811] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:40:56.271 [2024-10-11 23:03:59.357895] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:56.271 [2024-10-11 23:03:59.424062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:56.271 [2024-10-11 23:03:59.473722] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:56.271 [2024-10-11 23:03:59.473766] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:56.271 [2024-10-11 23:03:59.473780] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:56.271 [2024-10-11 23:03:59.473793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:56.271 [2024-10-11 23:03:59.473804] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:40:56.271 [2024-10-11 23:03:59.474339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:56.530 23:03:59 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:56.530 23:03:59 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:40:56.530 23:03:59 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:40:56.530 23:03:59 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:56.530 23:03:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:56.530 23:03:59 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:56.530 23:03:59 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:40:56.530 23:03:59 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:40:56.530 23:03:59 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.530 23:03:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:56.530 [2024-10-11 23:03:59.614349] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:56.530 23:03:59 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.530 23:03:59 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:40:56.530 23:03:59 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:56.530 23:03:59 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:56.530 23:03:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:56.530 ************************************ 00:40:56.530 START TEST fio_dif_1_default 00:40:56.530 ************************************ 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:56.530 bdev_null0 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:56.530 [2024-10-11 23:03:59.670665] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:56.530 { 00:40:56.530 "params": { 00:40:56.530 "name": "Nvme$subsystem", 00:40:56.530 "trtype": "$TEST_TRANSPORT", 00:40:56.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:56.530 "adrfam": "ipv4", 00:40:56.530 "trsvcid": "$NVMF_PORT", 00:40:56.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:56.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:56.530 "hdgst": ${hdgst:-false}, 00:40:56.530 "ddgst": ${ddgst:-false} 00:40:56.530 }, 00:40:56.530 "method": "bdev_nvme_attach_controller" 00:40:56.530 } 00:40:56.530 EOF 00:40:56.530 )") 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:40:56.530 "params": { 00:40:56.530 "name": "Nvme0", 00:40:56.530 "trtype": "tcp", 00:40:56.530 "traddr": "10.0.0.2", 00:40:56.530 "adrfam": "ipv4", 00:40:56.530 "trsvcid": "4420", 00:40:56.530 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:56.530 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:56.530 "hdgst": false, 00:40:56.530 "ddgst": false 00:40:56.530 }, 00:40:56.530 "method": "bdev_nvme_attach_controller" 00:40:56.530 }' 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:56.530 23:03:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:56.788 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:56.788 fio-3.35 
00:40:56.788 Starting 1 thread 00:41:08.994 00:41:08.994 filename0: (groupid=0, jobs=1): err= 0: pid=461933: Fri Oct 11 23:04:10 2024 00:41:08.994 read: IOPS=97, BW=392KiB/s (401kB/s)(3920KiB/10011msec) 00:41:08.994 slat (usec): min=4, max=101, avg= 9.80, stdev= 4.25 00:41:08.994 clat (usec): min=640, max=46916, avg=40830.38, stdev=2601.21 00:41:08.994 lat (usec): min=648, max=46940, avg=40840.18, stdev=2599.83 00:41:08.994 clat percentiles (usec): 00:41:08.994 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:08.994 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:08.994 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:08.994 | 99.00th=[41157], 99.50th=[41157], 99.90th=[46924], 99.95th=[46924], 00:41:08.994 | 99.99th=[46924] 00:41:08.994 bw ( KiB/s): min= 384, max= 416, per=99.60%, avg=390.40, stdev=13.13, samples=20 00:41:08.994 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:41:08.994 lat (usec) : 750=0.41% 00:41:08.994 lat (msec) : 50=99.59% 00:41:08.994 cpu : usr=90.97%, sys=8.74%, ctx=14, majf=0, minf=241 00:41:08.994 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:08.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.994 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:08.994 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:08.994 00:41:08.994 Run status group 0 (all jobs): 00:41:08.994 READ: bw=392KiB/s (401kB/s), 392KiB/s-392KiB/s (401kB/s-401kB/s), io=3920KiB (4014kB), run=10011-10011msec 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:41:08.994 23:04:10 
nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:08.994 00:41:08.994 real 0m11.237s 00:41:08.994 user 0m10.330s 00:41:08.994 sys 0m1.196s 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:08.994 ************************************ 00:41:08.994 END TEST fio_dif_1_default 00:41:08.994 ************************************ 00:41:08.994 23:04:10 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:41:08.994 23:04:10 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:08.994 23:04:10 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:08.994 23:04:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:08.994 ************************************ 00:41:08.994 START TEST fio_dif_1_multi_subsystems 00:41:08.994 ************************************ 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:08.994 bdev_null0 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:08.994 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:08.994 [2024-10-11 23:04:10.947888] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:08.995 bdev_null1 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:08.995 23:04:10 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:08.995 { 00:41:08.995 "params": { 00:41:08.995 
"name": "Nvme$subsystem", 00:41:08.995 "trtype": "$TEST_TRANSPORT", 00:41:08.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:08.995 "adrfam": "ipv4", 00:41:08.995 "trsvcid": "$NVMF_PORT", 00:41:08.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:08.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:08.995 "hdgst": ${hdgst:-false}, 00:41:08.995 "ddgst": ${ddgst:-false} 00:41:08.995 }, 00:41:08.995 "method": "bdev_nvme_attach_controller" 00:41:08.995 } 00:41:08.995 EOF 00:41:08.995 )") 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in 
"${sanitizers[@]}" 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:08.995 { 00:41:08.995 "params": { 00:41:08.995 "name": "Nvme$subsystem", 00:41:08.995 "trtype": "$TEST_TRANSPORT", 00:41:08.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:08.995 "adrfam": "ipv4", 00:41:08.995 "trsvcid": "$NVMF_PORT", 00:41:08.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:08.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:08.995 "hdgst": ${hdgst:-false}, 00:41:08.995 "ddgst": ${ddgst:-false} 00:41:08.995 }, 00:41:08.995 "method": "bdev_nvme_attach_controller" 00:41:08.995 } 00:41:08.995 EOF 00:41:08.995 )") 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 
00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:41:08.995 23:04:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:08.995 "params": { 00:41:08.995 "name": "Nvme0", 00:41:08.995 "trtype": "tcp", 00:41:08.995 "traddr": "10.0.0.2", 00:41:08.995 "adrfam": "ipv4", 00:41:08.995 "trsvcid": "4420", 00:41:08.995 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:08.995 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:08.995 "hdgst": false, 00:41:08.995 "ddgst": false 00:41:08.995 }, 00:41:08.995 "method": "bdev_nvme_attach_controller" 00:41:08.995 },{ 00:41:08.995 "params": { 00:41:08.995 "name": "Nvme1", 00:41:08.995 "trtype": "tcp", 00:41:08.995 "traddr": "10.0.0.2", 00:41:08.995 "adrfam": "ipv4", 00:41:08.995 "trsvcid": "4420", 00:41:08.995 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:08.995 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:08.995 "hdgst": false, 00:41:08.995 "ddgst": false 00:41:08.995 }, 00:41:08.995 "method": "bdev_nvme_attach_controller" 00:41:08.995 }' 00:41:08.995 23:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:08.995 23:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:08.995 23:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:08.995 23:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:08.995 23:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:08.995 23:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:08.995 23:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:08.995 23:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:08.995 23:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:08.995 23:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:08.995 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:08.995 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:08.995 fio-3.35 00:41:08.995 Starting 2 threads 00:41:18.966 00:41:18.967 filename0: (groupid=0, jobs=1): err= 0: pid=463332: Fri Oct 11 23:04:21 2024 00:41:18.967 read: IOPS=191, BW=766KiB/s (784kB/s)(7664KiB/10008msec) 00:41:18.967 slat (nsec): min=7346, max=29834, avg=9865.93, stdev=2564.88 00:41:18.967 clat (usec): min=513, max=42464, avg=20861.53, stdev=20363.48 00:41:18.967 lat (usec): min=521, max=42477, avg=20871.40, stdev=20363.29 00:41:18.967 clat percentiles (usec): 00:41:18.967 | 1.00th=[ 562], 5.00th=[ 578], 10.00th=[ 578], 20.00th=[ 594], 00:41:18.967 | 30.00th=[ 603], 40.00th=[ 627], 50.00th=[ 971], 60.00th=[41157], 00:41:18.967 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:18.967 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:41:18.967 | 99.99th=[42206] 00:41:18.967 bw ( KiB/s): min= 704, max= 768, per=54.46%, avg=764.80, stdev=14.31, samples=20 00:41:18.967 iops : min= 176, max= 192, avg=191.20, stdev= 3.58, samples=20 00:41:18.967 lat (usec) : 750=46.56%, 1000=3.55% 00:41:18.967 lat (msec) : 4=0.21%, 50=49.69% 00:41:18.967 cpu : usr=94.37%, sys=5.30%, ctx=25, majf=0, minf=141 00:41:18.967 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:18.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:41:18.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:18.967 issued rwts: total=1916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:18.967 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:18.967 filename1: (groupid=0, jobs=1): err= 0: pid=463333: Fri Oct 11 23:04:21 2024 00:41:18.967 read: IOPS=159, BW=638KiB/s (653kB/s)(6384KiB/10014msec) 00:41:18.967 slat (nsec): min=4864, max=29577, avg=9983.01, stdev=2718.35 00:41:18.967 clat (usec): min=527, max=43146, avg=25064.90, stdev=19858.58 00:41:18.967 lat (usec): min=535, max=43161, avg=25074.88, stdev=19858.51 00:41:18.967 clat percentiles (usec): 00:41:18.967 | 1.00th=[ 553], 5.00th=[ 586], 10.00th=[ 603], 20.00th=[ 660], 00:41:18.967 | 30.00th=[ 742], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:41:18.967 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:41:18.967 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:41:18.967 | 99.99th=[43254] 00:41:18.967 bw ( KiB/s): min= 384, max= 1376, per=45.34%, avg=636.80, stdev=284.12, samples=20 00:41:18.967 iops : min= 96, max= 344, avg=159.20, stdev=71.03, samples=20 00:41:18.967 lat (usec) : 750=32.14%, 1000=7.33% 00:41:18.967 lat (msec) : 2=0.38%, 50=60.15% 00:41:18.967 cpu : usr=94.26%, sys=5.44%, ctx=14, majf=0, minf=146 00:41:18.967 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:18.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:18.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:18.967 issued rwts: total=1596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:18.967 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:18.967 00:41:18.967 Run status group 0 (all jobs): 00:41:18.967 READ: bw=1403KiB/s (1437kB/s), 638KiB/s-766KiB/s (653kB/s-784kB/s), io=13.7MiB (14.4MB), run=10008-10014msec 00:41:18.967 23:04:22 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@96 -- # destroy_subsystems 0 1 00:41:18.967 23:04:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:41:18.967 23:04:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:18.967 23:04:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:18.967 23:04:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:41:18.967 23:04:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:18.967 23:04:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.967 23:04:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:18.967 23:04:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.967 23:04:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:18.967 23:04:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.967 23:04:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:18.967 23:04:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.967 23:04:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:18.967 23:04:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:18.967 23:04:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:41:18.967 23:04:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:18.967 23:04:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.967 23:04:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
00:41:18.967 23:04:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.967 23:04:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:18.967 23:04:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.967 23:04:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:18.967 23:04:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.967 00:41:18.967 real 0m11.136s 00:41:18.967 user 0m20.114s 00:41:18.967 sys 0m1.366s 00:41:18.967 23:04:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:18.967 23:04:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:18.967 ************************************ 00:41:18.967 END TEST fio_dif_1_multi_subsystems 00:41:18.967 ************************************ 00:41:18.967 23:04:22 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:41:18.967 23:04:22 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:18.967 23:04:22 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:18.967 23:04:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:18.967 ************************************ 00:41:18.967 START TEST fio_dif_rand_params 00:41:18.967 ************************************ 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 
00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:18.967 bdev_null0 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:18.967 [2024-10-11 23:04:22.124086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:18.967 { 00:41:18.967 "params": { 00:41:18.967 "name": "Nvme$subsystem", 00:41:18.967 "trtype": "$TEST_TRANSPORT", 00:41:18.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:18.967 "adrfam": "ipv4", 00:41:18.967 "trsvcid": "$NVMF_PORT", 00:41:18.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:18.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:18.967 "hdgst": ${hdgst:-false}, 00:41:18.967 "ddgst": 
${ddgst:-false} 00:41:18.967 }, 00:41:18.967 "method": "bdev_nvme_attach_controller" 00:41:18.967 } 00:41:18.967 EOF 00:41:18.967 )") 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:18.967 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:18.968 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:18.968 23:04:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:18.968 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:18.968 23:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:18.968 23:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:18.968 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:18.968 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- 
# grep libasan 00:41:18.968 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:18.968 23:04:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:41:18.968 23:04:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:41:18.968 23:04:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:18.968 "params": { 00:41:18.968 "name": "Nvme0", 00:41:18.968 "trtype": "tcp", 00:41:18.968 "traddr": "10.0.0.2", 00:41:18.968 "adrfam": "ipv4", 00:41:18.968 "trsvcid": "4420", 00:41:18.968 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:18.968 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:18.968 "hdgst": false, 00:41:18.968 "ddgst": false 00:41:18.968 }, 00:41:18.968 "method": "bdev_nvme_attach_controller" 00:41:18.968 }' 00:41:18.968 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:18.968 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:18.968 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:18.968 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:18.968 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:18.968 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:18.968 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:18.968 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:18.968 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:18.968 23:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:19.225 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:19.225 ... 00:41:19.225 fio-3.35 00:41:19.225 Starting 3 threads 00:41:25.812 00:41:25.812 filename0: (groupid=0, jobs=1): err= 0: pid=464717: Fri Oct 11 23:04:27 2024 00:41:25.812 read: IOPS=225, BW=28.2MiB/s (29.6MB/s)(142MiB/5010msec) 00:41:25.812 slat (nsec): min=4732, max=39059, avg=15554.78, stdev=3407.35 00:41:25.812 clat (usec): min=4194, max=60258, avg=13256.16, stdev=10148.97 00:41:25.812 lat (usec): min=4211, max=60281, avg=13271.71, stdev=10148.78 00:41:25.812 clat percentiles (usec): 00:41:25.812 | 1.00th=[ 4883], 5.00th=[ 6325], 10.00th=[ 7898], 20.00th=[ 8586], 00:41:25.812 | 30.00th=[ 9765], 40.00th=[10945], 50.00th=[11600], 60.00th=[11863], 00:41:25.812 | 70.00th=[12256], 80.00th=[12649], 90.00th=[13304], 95.00th=[48497], 00:41:25.812 | 99.00th=[52167], 99.50th=[52691], 99.90th=[58983], 99.95th=[60031], 00:41:25.812 | 99.99th=[60031] 00:41:25.812 bw ( KiB/s): min=21248, max=38144, per=33.84%, avg=28902.40, stdev=4955.88, samples=10 00:41:25.812 iops : min= 166, max= 298, avg=225.80, stdev=38.72, samples=10 00:41:25.812 lat (msec) : 10=31.10%, 20=62.28%, 50=2.74%, 100=3.89% 00:41:25.812 cpu : usr=91.26%, sys=6.89%, ctx=283, majf=0, minf=97 00:41:25.812 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:25.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:25.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:25.812 issued rwts: total=1132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:25.812 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:25.812 filename0: (groupid=0, jobs=1): err= 0: pid=464718: Fri Oct 11 23:04:27 2024 00:41:25.812 read: IOPS=234, BW=29.3MiB/s (30.7MB/s)(147MiB/5025msec) 00:41:25.812 slat (nsec): min=4676, max=46780, 
avg=15379.89, stdev=4190.44 00:41:25.812 clat (usec): min=4616, max=55190, avg=12784.25, stdev=8793.63 00:41:25.812 lat (usec): min=4629, max=55206, avg=12799.63, stdev=8793.81 00:41:25.812 clat percentiles (usec): 00:41:25.812 | 1.00th=[ 5014], 5.00th=[ 6194], 10.00th=[ 7963], 20.00th=[ 8586], 00:41:25.812 | 30.00th=[ 9372], 40.00th=[10945], 50.00th=[11731], 60.00th=[12256], 00:41:25.812 | 70.00th=[12649], 80.00th=[13042], 90.00th=[13960], 95.00th=[17171], 00:41:25.812 | 99.00th=[52691], 99.50th=[53216], 99.90th=[54789], 99.95th=[55313], 00:41:25.812 | 99.99th=[55313] 00:41:25.812 bw ( KiB/s): min=23040, max=36352, per=35.19%, avg=30054.40, stdev=4121.17, samples=10 00:41:25.812 iops : min= 180, max= 284, avg=234.80, stdev=32.20, samples=10 00:41:25.812 lat (msec) : 10=33.98%, 20=61.17%, 50=1.87%, 100=2.97% 00:41:25.812 cpu : usr=90.43%, sys=7.68%, ctx=200, majf=0, minf=138 00:41:25.812 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:25.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:25.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:25.812 issued rwts: total=1177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:25.812 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:25.812 filename0: (groupid=0, jobs=1): err= 0: pid=464719: Fri Oct 11 23:04:27 2024 00:41:25.812 read: IOPS=208, BW=26.1MiB/s (27.3MB/s)(131MiB/5033msec) 00:41:25.812 slat (nsec): min=6223, max=28591, avg=13990.31, stdev=1916.93 00:41:25.812 clat (usec): min=4450, max=93948, avg=14373.69, stdev=11020.14 00:41:25.812 lat (usec): min=4463, max=93964, avg=14387.68, stdev=11020.00 00:41:25.812 clat percentiles (usec): 00:41:25.812 | 1.00th=[ 5080], 5.00th=[ 6521], 10.00th=[ 7767], 20.00th=[ 8717], 00:41:25.812 | 30.00th=[10421], 40.00th=[11600], 50.00th=[12256], 60.00th=[12649], 00:41:25.812 | 70.00th=[13304], 80.00th=[14353], 90.00th=[16319], 95.00th=[48497], 00:41:25.812 | 99.00th=[53740], 
99.50th=[54789], 99.90th=[92799], 99.95th=[93848], 00:41:25.812 | 99.99th=[93848] 00:41:25.812 bw ( KiB/s): min=18432, max=32000, per=31.35%, avg=26777.60, stdev=4309.46, samples=10 00:41:25.812 iops : min= 144, max= 250, avg=209.20, stdev=33.67, samples=10 00:41:25.812 lat (msec) : 10=28.03%, 20=64.82%, 50=3.34%, 100=3.81% 00:41:25.812 cpu : usr=93.58%, sys=5.96%, ctx=8, majf=0, minf=31 00:41:25.812 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:25.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:25.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:25.812 issued rwts: total=1049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:25.812 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:25.812 00:41:25.812 Run status group 0 (all jobs): 00:41:25.812 READ: bw=83.4MiB/s (87.5MB/s), 26.1MiB/s-29.3MiB/s (27.3MB/s-30.7MB/s), io=420MiB (440MB), run=5010-5033msec 00:41:25.812 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:41:25.812 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:25.812 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:25.812 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:25.812 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:25.812 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:25.812 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:25.813 
23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:25.813 bdev_null0 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:25.813 [2024-10-11 23:04:28.160284] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:25.813 bdev_null1 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:25.813 23:04:28 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:25.813 bdev_null2 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:41:25.813 23:04:28 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:25.813 { 00:41:25.813 "params": { 00:41:25.813 "name": "Nvme$subsystem", 00:41:25.813 "trtype": "$TEST_TRANSPORT", 00:41:25.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:25.813 "adrfam": "ipv4", 00:41:25.813 "trsvcid": "$NVMF_PORT", 00:41:25.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:25.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:25.813 "hdgst": ${hdgst:-false}, 00:41:25.813 "ddgst": ${ddgst:-false} 00:41:25.813 }, 00:41:25.813 "method": "bdev_nvme_attach_controller" 00:41:25.813 } 00:41:25.813 EOF 00:41:25.813 )") 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1343 -- # local asan_lib= 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:25.813 { 00:41:25.813 "params": { 00:41:25.813 "name": "Nvme$subsystem", 00:41:25.813 "trtype": "$TEST_TRANSPORT", 00:41:25.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:25.813 "adrfam": "ipv4", 00:41:25.813 "trsvcid": "$NVMF_PORT", 00:41:25.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:25.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:25.813 "hdgst": ${hdgst:-false}, 00:41:25.813 "ddgst": ${ddgst:-false} 00:41:25.813 }, 00:41:25.813 "method": "bdev_nvme_attach_controller" 00:41:25.813 } 00:41:25.813 EOF 00:41:25.813 )") 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ 
)) 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:25.813 23:04:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:25.813 { 00:41:25.813 "params": { 00:41:25.813 "name": "Nvme$subsystem", 00:41:25.814 "trtype": "$TEST_TRANSPORT", 00:41:25.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:25.814 "adrfam": "ipv4", 00:41:25.814 "trsvcid": "$NVMF_PORT", 00:41:25.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:25.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:25.814 "hdgst": ${hdgst:-false}, 00:41:25.814 "ddgst": ${ddgst:-false} 00:41:25.814 }, 00:41:25.814 "method": "bdev_nvme_attach_controller" 00:41:25.814 } 00:41:25.814 EOF 00:41:25.814 )") 00:41:25.814 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:25.814 23:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:25.814 23:04:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:25.814 23:04:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
00:41:25.814 23:04:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:41:25.814 23:04:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:25.814 "params": { 00:41:25.814 "name": "Nvme0", 00:41:25.814 "trtype": "tcp", 00:41:25.814 "traddr": "10.0.0.2", 00:41:25.814 "adrfam": "ipv4", 00:41:25.814 "trsvcid": "4420", 00:41:25.814 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:25.814 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:25.814 "hdgst": false, 00:41:25.814 "ddgst": false 00:41:25.814 }, 00:41:25.814 "method": "bdev_nvme_attach_controller" 00:41:25.814 },{ 00:41:25.814 "params": { 00:41:25.814 "name": "Nvme1", 00:41:25.814 "trtype": "tcp", 00:41:25.814 "traddr": "10.0.0.2", 00:41:25.814 "adrfam": "ipv4", 00:41:25.814 "trsvcid": "4420", 00:41:25.814 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:25.814 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:25.814 "hdgst": false, 00:41:25.814 "ddgst": false 00:41:25.814 }, 00:41:25.814 "method": "bdev_nvme_attach_controller" 00:41:25.814 },{ 00:41:25.814 "params": { 00:41:25.814 "name": "Nvme2", 00:41:25.814 "trtype": "tcp", 00:41:25.814 "traddr": "10.0.0.2", 00:41:25.814 "adrfam": "ipv4", 00:41:25.814 "trsvcid": "4420", 00:41:25.814 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:41:25.814 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:41:25.814 "hdgst": false, 00:41:25.814 "ddgst": false 00:41:25.814 }, 00:41:25.814 "method": "bdev_nvme_attach_controller" 00:41:25.814 }' 00:41:25.814 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:25.814 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:25.814 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:25.814 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:25.814 23:04:28 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:25.814 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:25.814 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:25.814 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:25.814 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:25.814 23:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:25.814 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:25.814 ... 00:41:25.814 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:25.814 ... 00:41:25.814 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:25.814 ... 
00:41:25.814 fio-3.35 00:41:25.814 Starting 24 threads 00:41:38.097 00:41:38.097 filename0: (groupid=0, jobs=1): err= 0: pid=465586: Fri Oct 11 23:04:39 2024 00:41:38.097 read: IOPS=361, BW=1445KiB/s (1480kB/s)(14.1MiB/10026msec) 00:41:38.097 slat (nsec): min=8085, max=96448, avg=13486.69, stdev=8251.75 00:41:38.097 clat (msec): min=20, max=226, avg=44.17, stdev=36.82 00:41:38.097 lat (msec): min=20, max=226, avg=44.19, stdev=36.82 00:41:38.097 clat percentiles (msec): 00:41:38.097 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:41:38.097 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:38.097 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 155], 00:41:38.097 | 99.00th=[ 190], 99.50th=[ 201], 99.90th=[ 226], 99.95th=[ 226], 00:41:38.097 | 99.99th=[ 228] 00:41:38.097 bw ( KiB/s): min= 384, max= 1920, per=4.29%, avg=1443.20, stdev=710.79, samples=20 00:41:38.097 iops : min= 96, max= 480, avg=360.80, stdev=177.70, samples=20 00:41:38.097 lat (msec) : 50=91.28%, 250=8.72% 00:41:38.097 cpu : usr=98.38%, sys=1.23%, ctx=13, majf=0, minf=29 00:41:38.097 IO depths : 1=1.4%, 2=7.0%, 4=23.0%, 8=57.4%, 16=11.1%, 32=0.0%, >=64=0.0% 00:41:38.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.097 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.097 issued rwts: total=3622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.097 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:38.097 filename0: (groupid=0, jobs=1): err= 0: pid=465587: Fri Oct 11 23:04:39 2024 00:41:38.097 read: IOPS=351, BW=1406KiB/s (1439kB/s)(13.8MiB/10017msec) 00:41:38.097 slat (usec): min=6, max=119, avg=29.34, stdev=18.73 00:41:38.097 clat (msec): min=25, max=275, avg=45.31, stdev=46.99 00:41:38.097 lat (msec): min=25, max=275, avg=45.34, stdev=47.00 00:41:38.097 clat percentiles (msec): 00:41:38.098 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:41:38.098 | 30.00th=[ 34], 
40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:38.098 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 176], 00:41:38.098 | 99.00th=[ 268], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 275], 00:41:38.098 | 99.99th=[ 275] 00:41:38.098 bw ( KiB/s): min= 256, max= 2048, per=4.17%, avg=1401.60, stdev=743.44, samples=20 00:41:38.098 iops : min= 64, max= 512, avg=350.40, stdev=185.86, samples=20 00:41:38.098 lat (msec) : 50=93.12%, 100=0.91%, 250=4.60%, 500=1.36% 00:41:38.098 cpu : usr=98.04%, sys=1.41%, ctx=45, majf=0, minf=26 00:41:38.098 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:41:38.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.098 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.098 issued rwts: total=3520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.098 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:38.098 filename0: (groupid=0, jobs=1): err= 0: pid=465588: Fri Oct 11 23:04:39 2024 00:41:38.098 read: IOPS=348, BW=1393KiB/s (1426kB/s)(13.6MiB/10012msec) 00:41:38.098 slat (usec): min=8, max=126, avg=25.22, stdev=19.14 00:41:38.098 clat (msec): min=14, max=329, avg=45.73, stdev=51.06 00:41:38.098 lat (msec): min=14, max=329, avg=45.75, stdev=51.07 00:41:38.098 clat percentiles (msec): 00:41:38.098 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:41:38.098 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:38.098 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 228], 00:41:38.098 | 99.00th=[ 275], 99.50th=[ 288], 99.90th=[ 330], 99.95th=[ 330], 00:41:38.098 | 99.99th=[ 330] 00:41:38.098 bw ( KiB/s): min= 128, max= 1920, per=4.05%, avg=1360.84, stdev=796.18, samples=19 00:41:38.098 iops : min= 32, max= 480, avg=340.21, stdev=199.04, samples=19 00:41:38.098 lat (msec) : 20=0.80%, 50=93.23%, 250=4.13%, 500=1.84% 00:41:38.098 cpu : usr=97.88%, sys=1.49%, ctx=71, majf=0, minf=23 00:41:38.098 IO depths 
: 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:38.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.098 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.098 issued rwts: total=3486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.098 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:38.098 filename0: (groupid=0, jobs=1): err= 0: pid=465589: Fri Oct 11 23:04:39 2024 00:41:38.098 read: IOPS=347, BW=1389KiB/s (1422kB/s)(13.6MiB/10001msec) 00:41:38.098 slat (usec): min=5, max=131, avg=42.79, stdev=16.83 00:41:38.098 clat (msec): min=26, max=308, avg=45.70, stdev=49.90 00:41:38.098 lat (msec): min=26, max=308, avg=45.74, stdev=49.90 00:41:38.098 clat percentiles (msec): 00:41:38.098 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:41:38.098 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:41:38.098 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 213], 00:41:38.098 | 99.00th=[ 275], 99.50th=[ 305], 99.90th=[ 309], 99.95th=[ 309], 00:41:38.098 | 99.99th=[ 309] 00:41:38.098 bw ( KiB/s): min= 128, max= 1920, per=4.05%, avg=1360.84, stdev=785.69, samples=19 00:41:38.098 iops : min= 32, max= 480, avg=340.21, stdev=196.42, samples=19 00:41:38.098 lat (msec) : 50=93.55%, 100=0.46%, 250=4.44%, 500=1.56% 00:41:38.098 cpu : usr=97.98%, sys=1.45%, ctx=59, majf=0, minf=38 00:41:38.098 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:41:38.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.098 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.098 issued rwts: total=3472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.098 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:38.098 filename0: (groupid=0, jobs=1): err= 0: pid=465590: Fri Oct 11 23:04:39 2024 00:41:38.098 read: IOPS=348, BW=1394KiB/s (1428kB/s)(13.6MiB/10006msec) 00:41:38.098 
slat (usec): min=8, max=145, avg=68.61, stdev=24.28 00:41:38.098 clat (msec): min=27, max=334, avg=45.27, stdev=47.93 00:41:38.098 lat (msec): min=27, max=334, avg=45.34, stdev=47.93 00:41:38.098 clat percentiles (msec): 00:41:38.098 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:41:38.098 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:41:38.098 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 194], 00:41:38.098 | 99.00th=[ 268], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 334], 00:41:38.098 | 99.99th=[ 334] 00:41:38.098 bw ( KiB/s): min= 256, max= 1920, per=4.07%, avg=1367.58, stdev=775.14, samples=19 00:41:38.098 iops : min= 64, max= 480, avg=341.89, stdev=193.79, samples=19 00:41:38.098 lat (msec) : 50=93.12%, 100=0.46%, 250=5.05%, 500=1.38% 00:41:38.098 cpu : usr=96.77%, sys=2.06%, ctx=189, majf=0, minf=25 00:41:38.098 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:38.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.098 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.098 issued rwts: total=3488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.098 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:38.098 filename0: (groupid=0, jobs=1): err= 0: pid=465591: Fri Oct 11 23:04:39 2024 00:41:38.098 read: IOPS=348, BW=1394KiB/s (1427kB/s)(13.6MiB/10012msec) 00:41:38.098 slat (usec): min=9, max=178, avg=65.62, stdev=27.47 00:41:38.098 clat (msec): min=17, max=399, avg=45.36, stdev=50.72 00:41:38.098 lat (msec): min=17, max=399, avg=45.43, stdev=50.72 00:41:38.098 clat percentiles (msec): 00:41:38.098 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:41:38.098 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:41:38.098 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 226], 00:41:38.098 | 99.00th=[ 275], 99.50th=[ 296], 99.90th=[ 347], 99.95th=[ 401], 00:41:38.098 | 99.99th=[ 401] 
00:41:38.098 bw ( KiB/s): min= 128, max= 1920, per=4.13%, avg=1388.80, stdev=776.02, samples=20 00:41:38.098 iops : min= 32, max= 480, avg=347.20, stdev=194.01, samples=20 00:41:38.098 lat (msec) : 20=0.52%, 50=93.46%, 100=0.06%, 250=4.01%, 500=1.95% 00:41:38.098 cpu : usr=97.48%, sys=1.54%, ctx=135, majf=0, minf=20 00:41:38.098 IO depths : 1=5.3%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:41:38.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.098 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.098 issued rwts: total=3488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.098 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:38.098 filename0: (groupid=0, jobs=1): err= 0: pid=465592: Fri Oct 11 23:04:39 2024 00:41:38.098 read: IOPS=348, BW=1393KiB/s (1426kB/s)(13.6MiB/10019msec) 00:41:38.098 slat (usec): min=8, max=373, avg=38.31, stdev=14.00 00:41:38.098 clat (msec): min=19, max=341, avg=45.60, stdev=49.69 00:41:38.098 lat (msec): min=19, max=341, avg=45.64, stdev=49.69 00:41:38.098 clat percentiles (msec): 00:41:38.098 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:41:38.098 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:41:38.098 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 215], 00:41:38.098 | 99.00th=[ 275], 99.50th=[ 305], 99.90th=[ 309], 99.95th=[ 342], 00:41:38.098 | 99.99th=[ 342] 00:41:38.098 bw ( KiB/s): min= 128, max= 1920, per=4.13%, avg=1387.45, stdev=773.94, samples=20 00:41:38.098 iops : min= 32, max= 480, avg=346.85, stdev=193.48, samples=20 00:41:38.098 lat (msec) : 20=0.43%, 50=93.15%, 100=0.46%, 250=4.53%, 500=1.43% 00:41:38.098 cpu : usr=98.29%, sys=1.31%, ctx=16, majf=0, minf=34 00:41:38.098 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:38.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.098 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.098 issued rwts: total=3488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.098 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:38.098 filename0: (groupid=0, jobs=1): err= 0: pid=465593: Fri Oct 11 23:04:39 2024 00:41:38.098 read: IOPS=348, BW=1393KiB/s (1426kB/s)(13.6MiB/10016msec) 00:41:38.098 slat (usec): min=12, max=130, avg=79.06, stdev=10.52 00:41:38.098 clat (msec): min=17, max=286, avg=45.25, stdev=49.09 00:41:38.098 lat (msec): min=18, max=286, avg=45.32, stdev=49.09 00:41:38.098 clat percentiles (msec): 00:41:38.098 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:41:38.098 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:41:38.098 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 228], 00:41:38.098 | 99.00th=[ 266], 99.50th=[ 275], 99.90th=[ 288], 99.95th=[ 288], 00:41:38.098 | 99.99th=[ 288] 00:41:38.098 bw ( KiB/s): min= 128, max= 1920, per=4.13%, avg=1388.80, stdev=775.01, samples=20 00:41:38.098 iops : min= 32, max= 480, avg=347.20, stdev=193.75, samples=20 00:41:38.098 lat (msec) : 20=0.46%, 50=93.12%, 100=0.06%, 250=4.93%, 500=1.43% 00:41:38.098 cpu : usr=96.69%, sys=1.99%, ctx=121, majf=0, minf=42 00:41:38.098 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:38.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.098 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.098 issued rwts: total=3488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.098 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:38.098 filename1: (groupid=0, jobs=1): err= 0: pid=465594: Fri Oct 11 23:04:39 2024 00:41:38.098 read: IOPS=348, BW=1394KiB/s (1428kB/s)(13.6MiB/10006msec) 00:41:38.098 slat (usec): min=10, max=120, avg=42.19, stdev=13.96 00:41:38.098 clat (msec): min=32, max=302, avg=45.51, stdev=47.88 00:41:38.098 lat (msec): min=32, max=302, avg=45.55, stdev=47.88 00:41:38.098 
clat percentiles (msec): 00:41:38.098 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:41:38.098 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:41:38.098 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 190], 00:41:38.098 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 292], 99.95th=[ 305], 00:41:38.098 | 99.99th=[ 305] 00:41:38.098 bw ( KiB/s): min= 256, max= 1920, per=4.07%, avg=1367.58, stdev=775.01, samples=19 00:41:38.098 iops : min= 64, max= 480, avg=341.89, stdev=193.75, samples=19 00:41:38.098 lat (msec) : 50=93.12%, 100=0.46%, 250=4.93%, 500=1.49% 00:41:38.098 cpu : usr=97.68%, sys=1.47%, ctx=102, majf=0, minf=36 00:41:38.098 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:38.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.098 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.098 issued rwts: total=3488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.098 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:38.098 filename1: (groupid=0, jobs=1): err= 0: pid=465595: Fri Oct 11 23:04:39 2024 00:41:38.098 read: IOPS=357, BW=1431KiB/s (1465kB/s)(14.0MiB/10006msec) 00:41:38.098 slat (usec): min=8, max=103, avg=39.26, stdev=17.25 00:41:38.098 clat (msec): min=21, max=269, avg=44.40, stdev=38.00 00:41:38.098 lat (msec): min=21, max=269, avg=44.44, stdev=37.99 00:41:38.098 clat percentiles (msec): 00:41:38.098 | 1.00th=[ 27], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:41:38.098 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:41:38.098 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 161], 00:41:38.099 | 99.00th=[ 201], 99.50th=[ 213], 99.90th=[ 271], 99.95th=[ 271], 00:41:38.099 | 99.99th=[ 271] 00:41:38.099 bw ( KiB/s): min= 256, max= 1920, per=4.18%, avg=1406.32, stdev=727.74, samples=19 00:41:38.099 iops : min= 64, max= 480, avg=351.58, stdev=181.93, samples=19 00:41:38.099 lat (msec) : 
50=91.34%, 250=8.55%, 500=0.11% 00:41:38.099 cpu : usr=98.24%, sys=1.34%, ctx=17, majf=0, minf=32 00:41:38.099 IO depths : 1=5.6%, 2=11.5%, 4=24.0%, 8=52.0%, 16=6.9%, 32=0.0%, >=64=0.0% 00:41:38.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.099 complete : 0=0.0%, 4=93.8%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.099 issued rwts: total=3580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.099 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:38.099 filename1: (groupid=0, jobs=1): err= 0: pid=465596: Fri Oct 11 23:04:39 2024 00:41:38.099 read: IOPS=351, BW=1406KiB/s (1439kB/s)(13.8MiB/10017msec) 00:41:38.099 slat (usec): min=3, max=128, avg=39.19, stdev=17.76 00:41:38.099 clat (msec): min=25, max=340, avg=45.22, stdev=47.33 00:41:38.099 lat (msec): min=25, max=340, avg=45.26, stdev=47.34 00:41:38.099 clat percentiles (msec): 00:41:38.099 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:41:38.099 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:38.099 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 174], 00:41:38.099 | 99.00th=[ 268], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 342], 00:41:38.099 | 99.99th=[ 342] 00:41:38.099 bw ( KiB/s): min= 256, max= 2048, per=4.17%, avg=1401.60, stdev=743.44, samples=20 00:41:38.099 iops : min= 64, max= 512, avg=350.40, stdev=185.86, samples=20 00:41:38.099 lat (msec) : 50=93.24%, 100=0.85%, 250=4.55%, 500=1.36% 00:41:38.099 cpu : usr=97.15%, sys=1.78%, ctx=140, majf=0, minf=28 00:41:38.099 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:38.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.099 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.099 issued rwts: total=3520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.099 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:38.099 filename1: (groupid=0, jobs=1): err= 0: 
pid=465597: Fri Oct 11 23:04:39 2024 00:41:38.099 read: IOPS=346, BW=1388KiB/s (1421kB/s)(13.6MiB/10006msec) 00:41:38.099 slat (usec): min=9, max=111, avg=41.66, stdev=14.14 00:41:38.099 clat (msec): min=26, max=335, avg=45.74, stdev=50.04 00:41:38.099 lat (msec): min=26, max=335, avg=45.78, stdev=50.04 00:41:38.099 clat percentiles (msec): 00:41:38.099 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:41:38.099 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:41:38.099 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 226], 00:41:38.099 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 334], 99.95th=[ 334], 00:41:38.099 | 99.99th=[ 334] 00:41:38.099 bw ( KiB/s): min= 256, max= 1920, per=4.05%, avg=1360.84, stdev=784.66, samples=19 00:41:38.099 iops : min= 64, max= 480, avg=340.21, stdev=196.17, samples=19 00:41:38.099 lat (msec) : 50=93.61%, 100=0.40%, 250=3.97%, 500=2.02% 00:41:38.099 cpu : usr=98.21%, sys=1.39%, ctx=38, majf=0, minf=32 00:41:38.099 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:41:38.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.099 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.099 issued rwts: total=3472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.099 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:38.099 filename1: (groupid=0, jobs=1): err= 0: pid=465598: Fri Oct 11 23:04:39 2024 00:41:38.099 read: IOPS=354, BW=1417KiB/s (1452kB/s)(13.9MiB/10012msec) 00:41:38.099 slat (usec): min=6, max=124, avg=61.31, stdev=30.48 00:41:38.099 clat (msec): min=14, max=355, avg=44.61, stdev=51.61 00:41:38.099 lat (msec): min=14, max=355, avg=44.67, stdev=51.61 00:41:38.099 clat percentiles (msec): 00:41:38.099 | 1.00th=[ 20], 5.00th=[ 26], 10.00th=[ 32], 20.00th=[ 33], 00:41:38.099 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:41:38.099 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 
95.00th=[ 228], 00:41:38.099 | 99.00th=[ 288], 99.50th=[ 330], 99.90th=[ 351], 99.95th=[ 355], 00:41:38.099 | 99.99th=[ 355] 00:41:38.099 bw ( KiB/s): min= 128, max= 2176, per=4.20%, avg=1412.80, stdev=805.29, samples=20 00:41:38.099 iops : min= 32, max= 544, avg=353.20, stdev=201.32, samples=20 00:41:38.099 lat (msec) : 20=2.03%, 50=92.11%, 100=0.06%, 250=3.83%, 500=1.97% 00:41:38.099 cpu : usr=97.75%, sys=1.46%, ctx=58, majf=0, minf=35 00:41:38.099 IO depths : 1=4.2%, 2=9.8%, 4=22.7%, 8=54.8%, 16=8.5%, 32=0.0%, >=64=0.0% 00:41:38.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.099 complete : 0=0.0%, 4=93.6%, 8=0.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.099 issued rwts: total=3548,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.099 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:38.099 filename1: (groupid=0, jobs=1): err= 0: pid=465599: Fri Oct 11 23:04:39 2024 00:41:38.099 read: IOPS=348, BW=1393KiB/s (1427kB/s)(13.6MiB/10015msec) 00:41:38.099 slat (usec): min=8, max=119, avg=40.30, stdev=15.47 00:41:38.099 clat (msec): min=15, max=307, avg=45.57, stdev=49.57 00:41:38.099 lat (msec): min=15, max=307, avg=45.61, stdev=49.57 00:41:38.099 clat percentiles (msec): 00:41:38.099 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:41:38.099 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:41:38.099 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 215], 00:41:38.099 | 99.00th=[ 275], 99.50th=[ 300], 99.90th=[ 305], 99.95th=[ 309], 00:41:38.099 | 99.99th=[ 309] 00:41:38.099 bw ( KiB/s): min= 128, max= 1920, per=4.13%, avg=1388.40, stdev=774.73, samples=20 00:41:38.099 iops : min= 32, max= 480, avg=347.10, stdev=193.68, samples=20 00:41:38.099 lat (msec) : 20=0.32%, 50=93.26%, 100=0.46%, 250=4.47%, 500=1.49% 00:41:38.099 cpu : usr=98.40%, sys=1.18%, ctx=12, majf=0, minf=26 00:41:38.099 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:38.099 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.099 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.099 issued rwts: total=3488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.099 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:38.099 filename1: (groupid=0, jobs=1): err= 0: pid=465600: Fri Oct 11 23:04:39 2024 00:41:38.099 read: IOPS=348, BW=1393KiB/s (1426kB/s)(13.6MiB/10016msec) 00:41:38.099 slat (usec): min=8, max=121, avg=70.62, stdev=26.26 00:41:38.099 clat (msec): min=18, max=399, avg=45.33, stdev=50.82 00:41:38.099 lat (msec): min=18, max=399, avg=45.40, stdev=50.82 00:41:38.099 clat percentiles (msec): 00:41:38.099 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:41:38.099 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:41:38.099 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 226], 00:41:38.099 | 99.00th=[ 275], 99.50th=[ 300], 99.90th=[ 309], 99.95th=[ 401], 00:41:38.099 | 99.99th=[ 401] 00:41:38.099 bw ( KiB/s): min= 128, max= 1920, per=4.13%, avg=1388.80, stdev=775.01, samples=20 00:41:38.099 iops : min= 32, max= 480, avg=347.20, stdev=193.75, samples=20 00:41:38.099 lat (msec) : 20=0.52%, 50=93.52%, 250=3.96%, 500=2.01% 00:41:38.099 cpu : usr=97.73%, sys=1.57%, ctx=52, majf=0, minf=27 00:41:38.099 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:38.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.099 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.099 issued rwts: total=3488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.099 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:38.099 filename1: (groupid=0, jobs=1): err= 0: pid=465601: Fri Oct 11 23:04:39 2024 00:41:38.099 read: IOPS=348, BW=1393KiB/s (1426kB/s)(13.6MiB/10016msec) 00:41:38.099 slat (nsec): min=8447, max=62715, avg=23576.79, stdev=10277.99 00:41:38.099 clat (msec): 
min=17, max=300, avg=45.73, stdev=50.33 00:41:38.099 lat (msec): min=17, max=300, avg=45.75, stdev=50.33 00:41:38.099 clat percentiles (msec): 00:41:38.099 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:41:38.099 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:38.099 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 228], 00:41:38.099 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 300], 99.95th=[ 300], 00:41:38.099 | 99.99th=[ 300] 00:41:38.099 bw ( KiB/s): min= 128, max= 1920, per=4.13%, avg=1388.80, stdev=775.01, samples=20 00:41:38.099 iops : min= 32, max= 480, avg=347.20, stdev=193.75, samples=20 00:41:38.099 lat (msec) : 20=0.46%, 50=93.58%, 250=4.13%, 500=1.83% 00:41:38.099 cpu : usr=97.53%, sys=1.62%, ctx=134, majf=0, minf=30 00:41:38.099 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:38.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.099 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.099 issued rwts: total=3488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.099 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:38.099 filename2: (groupid=0, jobs=1): err= 0: pid=465602: Fri Oct 11 23:04:39 2024 00:41:38.099 read: IOPS=348, BW=1394KiB/s (1427kB/s)(13.6MiB/10012msec) 00:41:38.099 slat (usec): min=12, max=127, avg=82.78, stdev=11.87 00:41:38.099 clat (msec): min=18, max=274, avg=45.19, stdev=48.60 00:41:38.099 lat (msec): min=18, max=274, avg=45.28, stdev=48.59 00:41:38.099 clat percentiles (msec): 00:41:38.099 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:41:38.099 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:41:38.099 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 226], 00:41:38.099 | 99.00th=[ 266], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 275], 00:41:38.099 | 99.99th=[ 275] 00:41:38.099 bw ( KiB/s): min= 256, max= 1920, per=4.13%, avg=1388.80, 
stdev=760.41, samples=20 00:41:38.099 iops : min= 64, max= 480, avg=347.20, stdev=190.10, samples=20 00:41:38.099 lat (msec) : 20=0.06%, 50=93.52%, 250=5.05%, 500=1.38% 00:41:38.099 cpu : usr=95.79%, sys=2.44%, ctx=208, majf=0, minf=25 00:41:38.099 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:38.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.099 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.099 issued rwts: total=3488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.099 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:38.099 filename2: (groupid=0, jobs=1): err= 0: pid=465603: Fri Oct 11 23:04:39 2024 00:41:38.099 read: IOPS=348, BW=1393KiB/s (1427kB/s)(13.6MiB/10013msec) 00:41:38.099 slat (nsec): min=7914, max=90039, avg=23930.54, stdev=12640.87 00:41:38.099 clat (msec): min=16, max=401, avg=45.71, stdev=50.56 00:41:38.099 lat (msec): min=16, max=401, avg=45.73, stdev=50.57 00:41:38.099 clat percentiles (msec): 00:41:38.099 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:41:38.099 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:38.099 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 226], 00:41:38.099 | 99.00th=[ 275], 99.50th=[ 296], 99.90th=[ 347], 99.95th=[ 401], 00:41:38.099 | 99.99th=[ 401] 00:41:38.099 bw ( KiB/s): min= 128, max= 1920, per=4.13%, avg=1388.80, stdev=776.16, samples=20 00:41:38.100 iops : min= 32, max= 480, avg=347.20, stdev=194.04, samples=20 00:41:38.100 lat (msec) : 20=0.69%, 50=93.29%, 100=0.06%, 250=4.13%, 500=1.83% 00:41:38.100 cpu : usr=97.90%, sys=1.32%, ctx=88, majf=0, minf=20 00:41:38.100 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:41:38.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.100 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.100 issued rwts: total=3488,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:41:38.100 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:38.100 filename2: (groupid=0, jobs=1): err= 0: pid=465604: Fri Oct 11 23:04:39 2024 00:41:38.100 read: IOPS=348, BW=1394KiB/s (1427kB/s)(13.6MiB/10012msec) 00:41:38.100 slat (usec): min=8, max=110, avg=24.18, stdev=17.13 00:41:38.100 clat (msec): min=14, max=355, avg=45.70, stdev=51.72 00:41:38.100 lat (msec): min=14, max=355, avg=45.73, stdev=51.73 00:41:38.100 clat percentiles (msec): 00:41:38.100 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:41:38.100 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:38.100 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 228], 00:41:38.100 | 99.00th=[ 288], 99.50th=[ 330], 99.90th=[ 351], 99.95th=[ 355], 00:41:38.100 | 99.99th=[ 355] 00:41:38.100 bw ( KiB/s): min= 128, max= 1920, per=4.05%, avg=1360.84, stdev=795.91, samples=19 00:41:38.100 iops : min= 32, max= 480, avg=340.21, stdev=198.98, samples=19 00:41:38.100 lat (msec) : 20=0.86%, 50=93.18%, 100=0.06%, 250=3.90%, 500=2.01% 00:41:38.100 cpu : usr=98.04%, sys=1.41%, ctx=51, majf=0, minf=25 00:41:38.100 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:41:38.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.100 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.100 issued rwts: total=3488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.100 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:38.100 filename2: (groupid=0, jobs=1): err= 0: pid=465605: Fri Oct 11 23:04:39 2024 00:41:38.100 read: IOPS=348, BW=1394KiB/s (1428kB/s)(13.6MiB/10006msec) 00:41:38.100 slat (nsec): min=9382, max=86722, avg=40534.33, stdev=14505.78 00:41:38.100 clat (msec): min=26, max=307, avg=45.55, stdev=48.16 00:41:38.100 lat (msec): min=26, max=307, avg=45.59, stdev=48.16 00:41:38.100 clat percentiles (msec): 00:41:38.100 | 1.00th=[ 33], 5.00th=[ 
33], 10.00th=[ 33], 20.00th=[ 33], 00:41:38.100 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:41:38.100 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 226], 00:41:38.100 | 99.00th=[ 268], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 309], 00:41:38.100 | 99.99th=[ 309] 00:41:38.100 bw ( KiB/s): min= 256, max= 1920, per=4.07%, avg=1367.58, stdev=775.14, samples=19 00:41:38.100 iops : min= 64, max= 480, avg=341.89, stdev=193.79, samples=19 00:41:38.100 lat (msec) : 50=93.18%, 100=0.40%, 250=4.99%, 500=1.43% 00:41:38.100 cpu : usr=97.99%, sys=1.49%, ctx=21, majf=0, minf=24 00:41:38.100 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:41:38.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.100 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.100 issued rwts: total=3488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.100 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:38.100 filename2: (groupid=0, jobs=1): err= 0: pid=465606: Fri Oct 11 23:04:39 2024 00:41:38.100 read: IOPS=351, BW=1406KiB/s (1439kB/s)(13.8MiB/10017msec) 00:41:38.100 slat (usec): min=6, max=126, avg=32.41, stdev=31.09 00:41:38.100 clat (msec): min=14, max=305, avg=45.25, stdev=47.48 00:41:38.100 lat (msec): min=14, max=305, avg=45.28, stdev=47.49 00:41:38.100 clat percentiles (msec): 00:41:38.100 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:41:38.100 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:38.100 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 190], 00:41:38.100 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 284], 99.95th=[ 305], 00:41:38.100 | 99.99th=[ 305] 00:41:38.100 bw ( KiB/s): min= 256, max= 2048, per=4.17%, avg=1401.60, stdev=743.31, samples=20 00:41:38.100 iops : min= 64, max= 512, avg=350.40, stdev=185.83, samples=20 00:41:38.100 lat (msec) : 20=0.45%, 50=92.73%, 100=0.91%, 250=4.43%, 500=1.48% 00:41:38.100 
cpu : usr=98.10%, sys=1.32%, ctx=122, majf=0, minf=36 00:41:38.100 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:38.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.100 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.100 issued rwts: total=3520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.100 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:38.100 filename2: (groupid=0, jobs=1): err= 0: pid=465607: Fri Oct 11 23:04:39 2024 00:41:38.100 read: IOPS=346, BW=1388KiB/s (1421kB/s)(13.6MiB/10006msec) 00:41:38.100 slat (usec): min=9, max=103, avg=42.43, stdev=14.49 00:41:38.100 clat (msec): min=26, max=335, avg=45.74, stdev=49.93 00:41:38.100 lat (msec): min=26, max=335, avg=45.78, stdev=49.93 00:41:38.100 clat percentiles (msec): 00:41:38.100 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:41:38.100 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:41:38.100 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 226], 00:41:38.100 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 334], 99.95th=[ 334], 00:41:38.100 | 99.99th=[ 334] 00:41:38.100 bw ( KiB/s): min= 256, max= 1920, per=4.05%, avg=1360.84, stdev=784.66, samples=19 00:41:38.100 iops : min= 64, max= 480, avg=340.21, stdev=196.17, samples=19 00:41:38.100 lat (msec) : 50=93.61%, 100=0.40%, 250=3.97%, 500=2.02% 00:41:38.100 cpu : usr=97.88%, sys=1.45%, ctx=69, majf=0, minf=27 00:41:38.100 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:38.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.100 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.100 issued rwts: total=3472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.100 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:38.100 filename2: (groupid=0, jobs=1): err= 0: pid=465608: Fri Oct 11 23:04:39 2024 
00:41:38.100 read: IOPS=366, BW=1464KiB/s (1499kB/s)(14.3MiB/10014msec) 00:41:38.100 slat (nsec): min=6382, max=92618, avg=18698.67, stdev=12959.36 00:41:38.100 clat (msec): min=13, max=332, avg=43.60, stdev=50.57 00:41:38.100 lat (msec): min=13, max=332, avg=43.62, stdev=50.58 00:41:38.100 clat percentiles (msec): 00:41:38.100 | 1.00th=[ 21], 5.00th=[ 22], 10.00th=[ 25], 20.00th=[ 30], 00:41:38.100 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:38.100 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 228], 00:41:38.100 | 99.00th=[ 275], 99.50th=[ 288], 99.90th=[ 334], 99.95th=[ 334], 00:41:38.100 | 99.99th=[ 334] 00:41:38.100 bw ( KiB/s): min= 128, max= 2320, per=4.35%, avg=1462.40, stdev=847.47, samples=20 00:41:38.100 iops : min= 32, max= 580, avg=365.60, stdev=211.87, samples=20 00:41:38.100 lat (msec) : 20=0.93%, 50=93.40%, 250=3.87%, 500=1.80% 00:41:38.100 cpu : usr=98.10%, sys=1.43%, ctx=15, majf=0, minf=29 00:41:38.100 IO depths : 1=0.3%, 2=0.9%, 4=3.7%, 8=78.7%, 16=16.3%, 32=0.0%, >=64=0.0% 00:41:38.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.100 complete : 0=0.0%, 4=89.6%, 8=8.6%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.100 issued rwts: total=3666,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.100 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:38.100 filename2: (groupid=0, jobs=1): err= 0: pid=465609: Fri Oct 11 23:04:39 2024 00:41:38.100 read: IOPS=348, BW=1393KiB/s (1426kB/s)(13.6MiB/10017msec) 00:41:38.100 slat (usec): min=8, max=117, avg=38.74, stdev=28.97 00:41:38.100 clat (msec): min=17, max=334, avg=45.61, stdev=50.34 00:41:38.100 lat (msec): min=17, max=334, avg=45.65, stdev=50.34 00:41:38.100 clat percentiles (msec): 00:41:38.100 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:41:38.100 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:38.100 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 226], 00:41:38.100 | 
99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 300], 99.95th=[ 334], 00:41:38.100 | 99.99th=[ 334] 00:41:38.100 bw ( KiB/s): min= 128, max= 1920, per=4.13%, avg=1388.80, stdev=775.01, samples=20 00:41:38.100 iops : min= 32, max= 480, avg=347.20, stdev=193.75, samples=20 00:41:38.100 lat (msec) : 20=0.46%, 50=93.58%, 250=4.13%, 500=1.83% 00:41:38.100 cpu : usr=98.34%, sys=1.23%, ctx=24, majf=0, minf=31 00:41:38.100 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:38.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.100 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.100 issued rwts: total=3488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.100 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:38.100 00:41:38.100 Run status group 0 (all jobs): 00:41:38.100 READ: bw=32.8MiB/s (34.4MB/s), 1388KiB/s-1464KiB/s (1421kB/s-1499kB/s), io=329MiB (345MB), run=10001-10026msec 00:41:38.100 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:41:38.100 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:38.100 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:38.100 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:38.100 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:38.100 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:38.100 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:38.100 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.100 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:38.100 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 
00:41:38.100 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:38.100 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.100 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:38.100 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:38.100 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:38.100 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:38.100 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:38.100 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:38.100 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.100 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:38.100 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:38.100 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.101 bdev_null0 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.101 [2024-10-11 23:04:39.909073] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:38.101 23:04:39 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.101 bdev_null1 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 
00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:38.101 { 00:41:38.101 "params": { 00:41:38.101 "name": "Nvme$subsystem", 00:41:38.101 "trtype": "$TEST_TRANSPORT", 00:41:38.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:38.101 "adrfam": "ipv4", 00:41:38.101 "trsvcid": "$NVMF_PORT", 00:41:38.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:38.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:38.101 "hdgst": ${hdgst:-false}, 00:41:38.101 "ddgst": ${ddgst:-false} 00:41:38.101 }, 00:41:38.101 "method": "bdev_nvme_attach_controller" 00:41:38.101 } 00:41:38.101 EOF 00:41:38.101 )") 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@54 -- # local file 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:38.101 { 00:41:38.101 "params": { 00:41:38.101 "name": "Nvme$subsystem", 00:41:38.101 "trtype": "$TEST_TRANSPORT", 00:41:38.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:38.101 "adrfam": "ipv4", 00:41:38.101 "trsvcid": "$NVMF_PORT", 00:41:38.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:38.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:38.101 "hdgst": ${hdgst:-false}, 00:41:38.101 "ddgst": ${ddgst:-false} 00:41:38.101 }, 00:41:38.101 "method": "bdev_nvme_attach_controller" 00:41:38.101 } 00:41:38.101 EOF 00:41:38.101 )") 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 
-- # cat 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:38.101 "params": { 00:41:38.101 "name": "Nvme0", 00:41:38.101 "trtype": "tcp", 00:41:38.101 "traddr": "10.0.0.2", 00:41:38.101 "adrfam": "ipv4", 00:41:38.101 "trsvcid": "4420", 00:41:38.101 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:38.101 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:38.101 "hdgst": false, 00:41:38.101 "ddgst": false 00:41:38.101 }, 00:41:38.101 "method": "bdev_nvme_attach_controller" 00:41:38.101 },{ 00:41:38.101 "params": { 00:41:38.101 "name": "Nvme1", 00:41:38.101 "trtype": "tcp", 00:41:38.101 "traddr": "10.0.0.2", 00:41:38.101 "adrfam": "ipv4", 00:41:38.101 "trsvcid": "4420", 00:41:38.101 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:38.101 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:38.101 "hdgst": false, 00:41:38.101 "ddgst": false 00:41:38.101 }, 00:41:38.101 "method": "bdev_nvme_attach_controller" 00:41:38.101 }' 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:38.101 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:38.102 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:38.102 23:04:39 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:38.102 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:38.102 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:38.102 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:38.102 23:04:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:38.102 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:38.102 ... 00:41:38.102 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:38.102 ... 00:41:38.102 fio-3.35 00:41:38.102 Starting 4 threads 00:41:43.365 00:41:43.365 filename0: (groupid=0, jobs=1): err= 0: pid=466987: Fri Oct 11 23:04:45 2024 00:41:43.365 read: IOPS=1861, BW=14.5MiB/s (15.2MB/s)(72.8MiB/5003msec) 00:41:43.365 slat (nsec): min=6589, max=58946, avg=17910.97, stdev=9055.62 00:41:43.365 clat (usec): min=699, max=7772, avg=4232.08, stdev=583.60 00:41:43.365 lat (usec): min=718, max=7786, avg=4249.99, stdev=584.28 00:41:43.365 clat percentiles (usec): 00:41:43.365 | 1.00th=[ 2212], 5.00th=[ 3425], 10.00th=[ 3720], 20.00th=[ 4015], 00:41:43.365 | 30.00th=[ 4113], 40.00th=[ 4228], 50.00th=[ 4228], 60.00th=[ 4293], 00:41:43.365 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 5014], 00:41:43.365 | 99.00th=[ 6521], 99.50th=[ 6915], 99.90th=[ 7439], 99.95th=[ 7635], 00:41:43.365 | 99.99th=[ 7767] 00:41:43.365 bw ( KiB/s): min=14592, max=15888, per=25.14%, avg=14892.80, stdev=375.22, samples=10 00:41:43.365 iops : min= 1824, max= 1986, avg=1861.60, stdev=46.90, samples=10 00:41:43.365 lat (usec) : 750=0.01%, 1000=0.03% 00:41:43.366 lat (msec) : 
2=0.75%, 4=18.29%, 10=80.92% 00:41:43.366 cpu : usr=96.12%, sys=3.40%, ctx=9, majf=0, minf=50 00:41:43.366 IO depths : 1=0.2%, 2=16.5%, 4=56.1%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:43.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:43.366 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:43.366 issued rwts: total=9313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:43.366 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:43.366 filename0: (groupid=0, jobs=1): err= 0: pid=466988: Fri Oct 11 23:04:45 2024 00:41:43.366 read: IOPS=1821, BW=14.2MiB/s (14.9MB/s)(71.2MiB/5002msec) 00:41:43.366 slat (nsec): min=7017, max=61341, avg=17314.87, stdev=8849.63 00:41:43.366 clat (usec): min=759, max=7794, avg=4330.15, stdev=637.22 00:41:43.366 lat (usec): min=773, max=7802, avg=4347.46, stdev=637.07 00:41:43.366 clat percentiles (usec): 00:41:43.366 | 1.00th=[ 2147], 5.00th=[ 3589], 10.00th=[ 3851], 20.00th=[ 4080], 00:41:43.366 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:41:43.366 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4752], 95.00th=[ 5407], 00:41:43.366 | 99.00th=[ 6915], 99.50th=[ 7242], 99.90th=[ 7504], 99.95th=[ 7570], 00:41:43.366 | 99.99th=[ 7767] 00:41:43.366 bw ( KiB/s): min=14156, max=14832, per=24.59%, avg=14566.00, stdev=218.55, samples=10 00:41:43.366 iops : min= 1769, max= 1854, avg=1820.70, stdev=27.42, samples=10 00:41:43.366 lat (usec) : 1000=0.12% 00:41:43.366 lat (msec) : 2=0.75%, 4=12.82%, 10=86.31% 00:41:43.366 cpu : usr=95.42%, sys=4.08%, ctx=8, majf=0, minf=31 00:41:43.366 IO depths : 1=0.2%, 2=14.3%, 4=58.5%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:43.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:43.366 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:43.366 issued rwts: total=9110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:43.366 latency : target=0, window=0, 
percentile=100.00%, depth=8 00:41:43.366 filename1: (groupid=0, jobs=1): err= 0: pid=466989: Fri Oct 11 23:04:45 2024 00:41:43.366 read: IOPS=1834, BW=14.3MiB/s (15.0MB/s)(71.8MiB/5005msec) 00:41:43.366 slat (nsec): min=7381, max=57877, avg=17639.68, stdev=8700.15 00:41:43.366 clat (usec): min=740, max=7788, avg=4296.61, stdev=612.52 00:41:43.366 lat (usec): min=752, max=7807, avg=4314.24, stdev=612.50 00:41:43.366 clat percentiles (usec): 00:41:43.366 | 1.00th=[ 2278], 5.00th=[ 3556], 10.00th=[ 3818], 20.00th=[ 4080], 00:41:43.366 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:41:43.366 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 5211], 00:41:43.366 | 99.00th=[ 6783], 99.50th=[ 7177], 99.90th=[ 7635], 99.95th=[ 7701], 00:41:43.366 | 99.99th=[ 7767] 00:41:43.366 bw ( KiB/s): min=14352, max=14960, per=24.79%, avg=14681.60, stdev=215.35, samples=10 00:41:43.366 iops : min= 1794, max= 1870, avg=1835.20, stdev=26.92, samples=10 00:41:43.366 lat (usec) : 750=0.01%, 1000=0.11% 00:41:43.366 lat (msec) : 2=0.69%, 4=14.73%, 10=84.46% 00:41:43.366 cpu : usr=95.92%, sys=3.62%, ctx=9, majf=0, minf=84 00:41:43.366 IO depths : 1=0.2%, 2=14.3%, 4=58.4%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:43.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:43.366 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:43.366 issued rwts: total=9184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:43.366 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:43.366 filename1: (groupid=0, jobs=1): err= 0: pid=466990: Fri Oct 11 23:04:45 2024 00:41:43.366 read: IOPS=1888, BW=14.8MiB/s (15.5MB/s)(73.8MiB/5004msec) 00:41:43.366 slat (nsec): min=7062, max=88187, avg=19105.50, stdev=8025.10 00:41:43.366 clat (usec): min=811, max=7829, avg=4169.41, stdev=476.67 00:41:43.366 lat (usec): min=825, max=7851, avg=4188.51, stdev=477.61 00:41:43.366 clat percentiles (usec): 00:41:43.366 | 1.00th=[ 
2638], 5.00th=[ 3392], 10.00th=[ 3621], 20.00th=[ 3916], 00:41:43.366 | 30.00th=[ 4080], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:41:43.366 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4490], 95.00th=[ 4621], 00:41:43.366 | 99.00th=[ 5538], 99.50th=[ 6259], 99.90th=[ 7111], 99.95th=[ 7439], 00:41:43.366 | 99.99th=[ 7832] 00:41:43.366 bw ( KiB/s): min=14576, max=15952, per=25.51%, avg=15107.10, stdev=420.97, samples=10 00:41:43.366 iops : min= 1822, max= 1994, avg=1888.30, stdev=52.52, samples=10 00:41:43.366 lat (usec) : 1000=0.02% 00:41:43.366 lat (msec) : 2=0.32%, 4=23.07%, 10=76.59% 00:41:43.366 cpu : usr=95.18%, sys=4.24%, ctx=13, majf=0, minf=51 00:41:43.366 IO depths : 1=0.4%, 2=16.1%, 4=56.8%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:43.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:43.366 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:43.366 issued rwts: total=9448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:43.366 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:43.366 00:41:43.366 Run status group 0 (all jobs): 00:41:43.366 READ: bw=57.8MiB/s (60.7MB/s), 14.2MiB/s-14.8MiB/s (14.9MB/s-15.5MB/s), io=289MiB (304MB), run=5002-5005msec 00:41:43.366 23:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:41:43.366 23:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:43.366 23:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:43.366 23:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:43.366 23:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:43.366 23:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:43.366 23:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:43.366 23:04:46 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:43.366 23:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:43.366 23:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:43.366 23:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:43.366 23:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:43.366 23:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:43.366 23:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:43.366 23:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:43.366 23:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:43.366 23:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:43.366 23:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:43.366 23:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:43.366 23:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:43.366 23:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:43.366 23:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:43.366 23:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:43.366 23:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:43.366 00:41:43.366 real 0m24.045s 00:41:43.366 user 4m32.254s 00:41:43.366 sys 0m6.346s 00:41:43.366 23:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:43.366 23:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- 
# set +x 00:41:43.366 ************************************ 00:41:43.366 END TEST fio_dif_rand_params 00:41:43.366 ************************************ 00:41:43.366 23:04:46 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:41:43.366 23:04:46 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:43.366 23:04:46 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:43.366 23:04:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:43.366 ************************************ 00:41:43.366 START TEST fio_dif_digest 00:41:43.366 ************************************ 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:41:43.366 23:04:46 
nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:43.366 bdev_null0 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:43.366 [2024-10-11 23:04:46.226080] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:43.366 
23:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:43.366 23:04:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:43.367 { 00:41:43.367 "params": { 00:41:43.367 "name": "Nvme$subsystem", 00:41:43.367 "trtype": "$TEST_TRANSPORT", 00:41:43.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:43.367 "adrfam": "ipv4", 00:41:43.367 "trsvcid": "$NVMF_PORT", 00:41:43.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:43.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:43.367 "hdgst": ${hdgst:-false}, 00:41:43.367 "ddgst": ${ddgst:-false} 00:41:43.367 }, 00:41:43.367 "method": "bdev_nvme_attach_controller" 00:41:43.367 } 00:41:43.367 EOF 00:41:43.367 )") 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:43.367 23:04:46 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:43.367 "params": { 00:41:43.367 "name": "Nvme0", 00:41:43.367 "trtype": "tcp", 00:41:43.367 "traddr": "10.0.0.2", 00:41:43.367 "adrfam": "ipv4", 00:41:43.367 "trsvcid": "4420", 00:41:43.367 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:43.367 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:43.367 "hdgst": true, 00:41:43.367 "ddgst": true 00:41:43.367 }, 00:41:43.367 "method": "bdev_nvme_attach_controller" 00:41:43.367 }' 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:43.367 23:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:43.367 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:43.367 ... 
00:41:43.367 fio-3.35 00:41:43.367 Starting 3 threads 00:41:55.561 00:41:55.561 filename0: (groupid=0, jobs=1): err= 0: pid=467741: Fri Oct 11 23:04:57 2024 00:41:55.561 read: IOPS=201, BW=25.2MiB/s (26.4MB/s)(253MiB/10047msec) 00:41:55.561 slat (nsec): min=5564, max=70108, avg=14610.06, stdev=4234.38 00:41:55.561 clat (usec): min=11842, max=52255, avg=14859.07, stdev=1454.33 00:41:55.561 lat (usec): min=11860, max=52269, avg=14873.68, stdev=1454.39 00:41:55.561 clat percentiles (usec): 00:41:55.561 | 1.00th=[12780], 5.00th=[13304], 10.00th=[13698], 20.00th=[14091], 00:41:55.561 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14877], 60.00th=[15008], 00:41:55.561 | 70.00th=[15270], 80.00th=[15533], 90.00th=[15926], 95.00th=[16450], 00:41:55.561 | 99.00th=[17433], 99.50th=[17433], 99.90th=[19006], 99.95th=[47973], 00:41:55.561 | 99.99th=[52167] 00:41:55.561 bw ( KiB/s): min=25344, max=26880, per=34.27%, avg=25868.80, stdev=419.21, samples=20 00:41:55.561 iops : min= 198, max= 210, avg=202.10, stdev= 3.28, samples=20 00:41:55.561 lat (msec) : 20=99.90%, 50=0.05%, 100=0.05% 00:41:55.561 cpu : usr=91.93%, sys=7.47%, ctx=21, majf=0, minf=167 00:41:55.561 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:55.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.561 issued rwts: total=2023,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:55.561 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:55.561 filename0: (groupid=0, jobs=1): err= 0: pid=467742: Fri Oct 11 23:04:57 2024 00:41:55.561 read: IOPS=196, BW=24.5MiB/s (25.7MB/s)(245MiB/10009msec) 00:41:55.561 slat (nsec): min=8226, max=60309, avg=15537.53, stdev=4332.17 00:41:55.561 clat (usec): min=11351, max=20230, avg=15276.27, stdev=1096.44 00:41:55.561 lat (usec): min=11365, max=20269, avg=15291.80, stdev=1096.29 00:41:55.561 clat percentiles (usec): 00:41:55.561 | 
1.00th=[12649], 5.00th=[13566], 10.00th=[13960], 20.00th=[14353], 00:41:55.561 | 30.00th=[14746], 40.00th=[15008], 50.00th=[15270], 60.00th=[15533], 00:41:55.561 | 70.00th=[15795], 80.00th=[16057], 90.00th=[16581], 95.00th=[17171], 00:41:55.561 | 99.00th=[18220], 99.50th=[18482], 99.90th=[20317], 99.95th=[20317], 00:41:55.561 | 99.99th=[20317] 00:41:55.561 bw ( KiB/s): min=24576, max=25856, per=33.23%, avg=25088.00, stdev=380.62, samples=20 00:41:55.561 iops : min= 192, max= 202, avg=196.00, stdev= 2.97, samples=20 00:41:55.561 lat (msec) : 20=99.90%, 50=0.10% 00:41:55.561 cpu : usr=93.14%, sys=6.33%, ctx=31, majf=0, minf=175 00:41:55.561 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:55.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.561 issued rwts: total=1963,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:55.561 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:55.561 filename0: (groupid=0, jobs=1): err= 0: pid=467743: Fri Oct 11 23:04:57 2024 00:41:55.561 read: IOPS=193, BW=24.1MiB/s (25.3MB/s)(243MiB/10048msec) 00:41:55.561 slat (nsec): min=5715, max=52875, avg=14837.89, stdev=3705.49 00:41:55.561 clat (usec): min=11684, max=49709, avg=15497.47, stdev=1509.82 00:41:55.561 lat (usec): min=11704, max=49723, avg=15512.30, stdev=1509.80 00:41:55.561 clat percentiles (usec): 00:41:55.561 | 1.00th=[13042], 5.00th=[13829], 10.00th=[14222], 20.00th=[14615], 00:41:55.561 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15401], 60.00th=[15664], 00:41:55.561 | 70.00th=[15926], 80.00th=[16319], 90.00th=[16909], 95.00th=[17171], 00:41:55.561 | 99.00th=[18220], 99.50th=[18482], 99.90th=[48497], 99.95th=[49546], 00:41:55.561 | 99.99th=[49546] 00:41:55.561 bw ( KiB/s): min=23808, max=25600, per=32.84%, avg=24793.60, stdev=471.86, samples=20 00:41:55.561 iops : min= 186, max= 200, avg=193.70, stdev= 3.69, samples=20 
00:41:55.561 lat (msec) : 20=99.90%, 50=0.10% 00:41:55.561 cpu : usr=92.60%, sys=6.84%, ctx=26, majf=0, minf=138 00:41:55.561 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:55.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.561 issued rwts: total=1940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:55.561 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:55.561 00:41:55.561 Run status group 0 (all jobs): 00:41:55.561 READ: bw=73.7MiB/s (77.3MB/s), 24.1MiB/s-25.2MiB/s (25.3MB/s-26.4MB/s), io=741MiB (777MB), run=10009-10048msec 00:41:55.561 23:04:57 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:41:55.561 23:04:57 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:41:55.561 23:04:57 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:41:55.561 23:04:57 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:55.561 23:04:57 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:41:55.561 23:04:57 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:55.561 23:04:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:55.561 23:04:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:55.561 23:04:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:55.561 23:04:57 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:55.561 23:04:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:55.561 23:04:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:55.561 23:04:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:55.561 00:41:55.561 real 0m11.219s 00:41:55.561 user 
0m29.199s 00:41:55.561 sys 0m2.354s 00:41:55.561 23:04:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:55.561 23:04:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:55.561 ************************************ 00:41:55.561 END TEST fio_dif_digest 00:41:55.561 ************************************ 00:41:55.562 23:04:57 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:41:55.562 23:04:57 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:41:55.562 23:04:57 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:55.562 23:04:57 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:41:55.562 23:04:57 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:55.562 23:04:57 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:41:55.562 23:04:57 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:55.562 23:04:57 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:55.562 rmmod nvme_tcp 00:41:55.562 rmmod nvme_fabrics 00:41:55.562 rmmod nvme_keyring 00:41:55.562 23:04:57 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:55.562 23:04:57 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:41:55.562 23:04:57 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:41:55.562 23:04:57 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 461702 ']' 00:41:55.562 23:04:57 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 461702 00:41:55.562 23:04:57 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 461702 ']' 00:41:55.562 23:04:57 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 461702 00:41:55.562 23:04:57 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:41:55.562 23:04:57 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:55.562 23:04:57 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 461702 00:41:55.562 23:04:57 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:55.562 23:04:57 nvmf_dif -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:55.562 23:04:57 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 461702' 00:41:55.562 killing process with pid 461702 00:41:55.562 23:04:57 nvmf_dif -- common/autotest_common.sh@969 -- # kill 461702 00:41:55.562 23:04:57 nvmf_dif -- common/autotest_common.sh@974 -- # wait 461702 00:41:55.562 23:04:57 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:41:55.562 23:04:57 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:55.821 Waiting for block devices as requested 00:41:55.821 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:41:55.821 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:56.080 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:56.080 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:56.080 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:56.340 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:56.340 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:56.340 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:56.340 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:56.599 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:56.599 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:56.599 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:56.857 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:56.857 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:56.857 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:56.857 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:57.115 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:57.115 23:05:00 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:57.115 23:05:00 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:41:57.115 23:05:00 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:41:57.115 23:05:00 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:41:57.115 23:05:00 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 
00:41:57.115 23:05:00 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:41:57.115 23:05:00 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:57.115 23:05:00 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:57.115 23:05:00 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:57.115 23:05:00 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:57.115 23:05:00 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:59.648 23:05:02 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:59.648 00:41:59.648 real 1m6.722s 00:41:59.648 user 6m28.557s 00:41:59.648 sys 0m18.321s 00:41:59.648 23:05:02 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:59.648 23:05:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:59.648 ************************************ 00:41:59.648 END TEST nvmf_dif 00:41:59.648 ************************************ 00:41:59.648 23:05:02 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:59.648 23:05:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:59.648 23:05:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:59.648 23:05:02 -- common/autotest_common.sh@10 -- # set +x 00:41:59.648 ************************************ 00:41:59.648 START TEST nvmf_abort_qd_sizes 00:41:59.648 ************************************ 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:59.648 * Looking for test storage... 
00:41:59.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:59.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:59.648 --rc genhtml_branch_coverage=1 00:41:59.648 --rc genhtml_function_coverage=1 00:41:59.648 --rc genhtml_legend=1 00:41:59.648 --rc geninfo_all_blocks=1 00:41:59.648 --rc geninfo_unexecuted_blocks=1 00:41:59.648 00:41:59.648 ' 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:59.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:59.648 --rc genhtml_branch_coverage=1 00:41:59.648 --rc genhtml_function_coverage=1 00:41:59.648 --rc genhtml_legend=1 00:41:59.648 --rc 
geninfo_all_blocks=1 00:41:59.648 --rc geninfo_unexecuted_blocks=1 00:41:59.648 00:41:59.648 ' 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:59.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:59.648 --rc genhtml_branch_coverage=1 00:41:59.648 --rc genhtml_function_coverage=1 00:41:59.648 --rc genhtml_legend=1 00:41:59.648 --rc geninfo_all_blocks=1 00:41:59.648 --rc geninfo_unexecuted_blocks=1 00:41:59.648 00:41:59.648 ' 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:59.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:59.648 --rc genhtml_branch_coverage=1 00:41:59.648 --rc genhtml_function_coverage=1 00:41:59.648 --rc genhtml_legend=1 00:41:59.648 --rc geninfo_all_blocks=1 00:41:59.648 --rc geninfo_unexecuted_blocks=1 00:41:59.648 00:41:59.648 ' 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:41:59.648 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:59.649 23:05:02 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:59.649 23:05:02 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:59.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:41:59.649 23:05:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:01.550 23:05:04 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:01.550 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:01.550 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:01.550 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up 
== up ]] 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:01.550 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:01.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:01.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:42:01.550 00:42:01.550 --- 10.0.0.2 ping statistics --- 00:42:01.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:01.550 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:42:01.550 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:01.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:01.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:42:01.808 00:42:01.808 --- 10.0.0.1 ping statistics --- 00:42:01.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:01.808 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:42:01.808 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:01.808 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:42:01.808 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:42:01.808 23:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:02.742 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:02.742 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:02.742 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:02.742 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:02.742 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:02.742 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:02.742 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:02.742 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:02.999 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:02.999 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:02.999 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:02.999 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:02.999 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:02.999 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:02.999 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:02.999 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:03.934 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:42:03.934 23:05:07 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:03.934 23:05:07 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:42:03.934 23:05:07 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:42:03.934 23:05:07 
nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:03.934 23:05:07 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:42:03.934 23:05:07 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:42:03.934 23:05:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:42:03.934 23:05:07 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:42:03.934 23:05:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:03.934 23:05:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:03.935 23:05:07 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=472675 00:42:03.935 23:05:07 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:42:03.935 23:05:07 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 472675 00:42:03.935 23:05:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 472675 ']' 00:42:03.935 23:05:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:03.935 23:05:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:03.935 23:05:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:03.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:03.935 23:05:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:03.935 23:05:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:03.935 [2024-10-11 23:05:07.190955] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:42:03.935 [2024-10-11 23:05:07.191033] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:04.192 [2024-10-11 23:05:07.258950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:04.192 [2024-10-11 23:05:07.307901] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:04.192 [2024-10-11 23:05:07.307952] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:04.192 [2024-10-11 23:05:07.307980] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:04.192 [2024-10-11 23:05:07.307991] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:04.192 [2024-10-11 23:05:07.308002] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:42:04.192 [2024-10-11 23:05:07.309468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:04.192 [2024-10-11 23:05:07.309532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:04.192 [2024-10-11 23:05:07.311569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:04.192 [2024-10-11 23:05:07.311581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:04.192 23:05:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:04.192 23:05:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:42:04.192 23:05:07 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:42:04.192 23:05:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:04.192 23:05:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:04.192 23:05:07 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:04.193 23:05:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:42:04.193 23:05:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:42:04.193 23:05:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:42:04.193 23:05:07 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:42:04.193 23:05:07 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:42:04.193 23:05:07 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:42:04.193 23:05:07 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:42:04.193 23:05:07 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:42:04.193 23:05:07 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:42:04.193 23:05:07 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:42:04.193 23:05:07 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:42:04.193 23:05:07 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:42:04.193 23:05:07 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:42:04.193 23:05:07 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:42:04.193 23:05:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:42:04.193 23:05:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:42:04.193 23:05:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:42:04.193 23:05:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:04.193 23:05:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:04.193 23:05:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:04.450 ************************************ 00:42:04.450 START TEST spdk_target_abort 00:42:04.450 ************************************ 00:42:04.450 23:05:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:42:04.450 23:05:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:42:04.450 23:05:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:42:04.450 23:05:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:04.450 23:05:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:07.727 spdk_targetn1 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:07.728 [2024-10-11 23:05:10.320810] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:07.728 [2024-10-11 23:05:10.359683] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:07.728 23:05:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:11.016 Initializing NVMe Controllers 00:42:11.016 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:11.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:11.016 Initialization complete. Launching workers. 
00:42:11.016 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11897, failed: 0 00:42:11.016 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1155, failed to submit 10742 00:42:11.016 success 687, unsuccessful 468, failed 0 00:42:11.016 23:05:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:11.016 23:05:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:14.290 Initializing NVMe Controllers 00:42:14.290 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:14.290 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:14.290 Initialization complete. Launching workers. 00:42:14.290 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8637, failed: 0 00:42:14.290 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1264, failed to submit 7373 00:42:14.290 success 327, unsuccessful 937, failed 0 00:42:14.290 23:05:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:14.290 23:05:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:16.815 Initializing NVMe Controllers 00:42:16.815 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:16.816 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:16.816 Initialization complete. Launching workers. 
00:42:16.816 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31571, failed: 0 00:42:16.816 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2795, failed to submit 28776 00:42:16.816 success 535, unsuccessful 2260, failed 0 00:42:16.816 23:05:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:42:16.816 23:05:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:16.816 23:05:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:16.816 23:05:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:16.816 23:05:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:42:16.816 23:05:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:16.816 23:05:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:18.188 23:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:18.188 23:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 472675 00:42:18.188 23:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 472675 ']' 00:42:18.188 23:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 472675 00:42:18.188 23:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:42:18.188 23:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:18.188 23:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 472675 00:42:18.188 23:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:18.188 23:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:18.188 23:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 472675' 00:42:18.188 killing process with pid 472675 00:42:18.188 23:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 472675 00:42:18.188 23:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 472675 00:42:18.446 00:42:18.446 real 0m14.152s 00:42:18.446 user 0m54.023s 00:42:18.446 sys 0m2.301s 00:42:18.446 23:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:18.446 23:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:18.446 ************************************ 00:42:18.446 END TEST spdk_target_abort 00:42:18.446 ************************************ 00:42:18.446 23:05:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:42:18.446 23:05:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:18.446 23:05:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:18.446 23:05:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:18.446 ************************************ 00:42:18.446 START TEST kernel_target_abort 00:42:18.446 ************************************ 00:42:18.446 23:05:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:42:18.446 23:05:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:42:18.446 23:05:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:42:18.446 23:05:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:42:18.446 23:05:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:42:18.446 23:05:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:18.446 23:05:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:18.446 23:05:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:42:18.446 23:05:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:18.446 23:05:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:42:18.446 23:05:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:42:18.446 23:05:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:42:18.446 23:05:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:42:18.446 23:05:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:42:18.446 23:05:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:42:18.446 23:05:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:18.446 23:05:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:18.446 23:05:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:42:18.446 23:05:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:42:18.446 23:05:21 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:42:18.446 23:05:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:42:18.446 23:05:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:42:18.446 23:05:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:19.824 Waiting for block devices as requested 00:42:19.824 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:19.824 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:20.084 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:20.084 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:20.084 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:20.343 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:20.343 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:20.343 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:20.343 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:20.601 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:20.601 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:20.601 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:20.601 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:20.859 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:20.859 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:20.859 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:20.859 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:21.117 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:42:21.117 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:42:21.118 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:42:21.118 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local 
device=nvme0n1 00:42:21.118 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:42:21.118 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:42:21.118 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:42:21.118 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:42:21.118 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:42:21.118 No valid GPT data, bailing 00:42:21.118 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:42:21.118 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:42:21.118 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:42:21.118 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:42:21.118 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:42:21.118 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:21.118 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:21.118 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:42:21.118 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:42:21.118 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:42:21.118 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:42:21.118 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:42:21.118 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:42:21.118 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:42:21.118 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:42:21.118 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:42:21.118 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:42:21.118 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:42:21.376 00:42:21.376 Discovery Log Number of Records 2, Generation counter 2 00:42:21.376 =====Discovery Log Entry 0====== 00:42:21.376 trtype: tcp 00:42:21.376 adrfam: ipv4 00:42:21.376 subtype: current discovery subsystem 00:42:21.376 treq: not specified, sq flow control disable supported 00:42:21.376 portid: 1 00:42:21.376 trsvcid: 4420 00:42:21.376 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:42:21.376 traddr: 10.0.0.1 00:42:21.376 eflags: none 00:42:21.376 sectype: none 00:42:21.376 =====Discovery Log Entry 1====== 00:42:21.376 trtype: tcp 00:42:21.376 adrfam: ipv4 00:42:21.376 subtype: nvme subsystem 00:42:21.376 treq: not specified, sq flow control disable supported 00:42:21.376 portid: 1 00:42:21.376 trsvcid: 4420 00:42:21.376 subnqn: nqn.2016-06.io.spdk:testnqn 00:42:21.376 traddr: 10.0.0.1 00:42:21.376 eflags: none 00:42:21.376 sectype: none 00:42:21.376 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:42:21.376 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:21.376 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:21.376 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:42:21.376 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:21.376 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:21.376 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:21.376 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:21.376 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:21.376 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:21.376 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:21.376 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:21.377 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:21.377 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:21.377 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:42:21.377 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:21.377 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:42:21.377 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:21.377 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:21.377 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:21.377 23:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:24.658 Initializing NVMe Controllers 00:42:24.658 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:24.658 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:24.658 Initialization complete. Launching workers. 
00:42:24.658 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 57568, failed: 0 00:42:24.658 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 57568, failed to submit 0 00:42:24.658 success 0, unsuccessful 57568, failed 0 00:42:24.658 23:05:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:24.658 23:05:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:27.941 Initializing NVMe Controllers 00:42:27.941 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:27.941 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:27.941 Initialization complete. Launching workers. 00:42:27.941 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 100221, failed: 0 00:42:27.941 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25254, failed to submit 74967 00:42:27.941 success 0, unsuccessful 25254, failed 0 00:42:27.941 23:05:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:27.941 23:05:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:30.471 Initializing NVMe Controllers 00:42:30.471 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:30.471 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:30.471 Initialization complete. Launching workers. 
00:42:30.471 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 96937, failed: 0 00:42:30.471 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24218, failed to submit 72719 00:42:30.471 success 0, unsuccessful 24218, failed 0 00:42:30.471 23:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:42:30.471 23:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:42:30.471 23:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:42:30.471 23:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:30.471 23:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:30.730 23:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:30.730 23:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:30.730 23:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:42:30.730 23:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:42:30.730 23:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:31.665 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:31.665 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:31.665 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:31.665 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:31.665 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:31.923 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:31.923 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:31.923 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:31.923 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:31.923 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:31.923 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:31.923 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:31.923 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:31.923 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:31.923 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:31.923 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:32.858 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:42:32.858 00:42:32.858 real 0m14.378s 00:42:32.858 user 0m6.580s 00:42:32.858 sys 0m3.324s 00:42:32.858 23:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:32.858 23:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:32.858 ************************************ 00:42:32.858 END TEST kernel_target_abort 00:42:32.858 ************************************ 00:42:32.858 23:05:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:42:32.858 23:05:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:42:32.858 23:05:36 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:42:32.858 23:05:36 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:42:32.858 23:05:36 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:32.858 23:05:36 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:42:32.858 23:05:36 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:32.858 23:05:36 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:32.858 rmmod nvme_tcp 00:42:32.858 rmmod nvme_fabrics 00:42:32.858 rmmod nvme_keyring 00:42:33.116 23:05:36 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:42:33.116 23:05:36 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:42:33.116 23:05:36 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:42:33.116 23:05:36 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 472675 ']' 00:42:33.116 23:05:36 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 472675 00:42:33.116 23:05:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 472675 ']' 00:42:33.116 23:05:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 472675 00:42:33.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (472675) - No such process 00:42:33.116 23:05:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 472675 is not found' 00:42:33.116 Process with pid 472675 is not found 00:42:33.116 23:05:36 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:42:33.116 23:05:36 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:34.053 Waiting for block devices as requested 00:42:34.053 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:34.312 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:34.312 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:34.571 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:34.571 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:34.571 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:34.831 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:34.831 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:34.831 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:34.831 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:35.090 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:35.090 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:35.090 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:35.090 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:35.350 0000:80:04.2 
(8086 0e22): vfio-pci -> ioatdma 00:42:35.350 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:35.350 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:35.350 23:05:38 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:42:35.350 23:05:38 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:42:35.350 23:05:38 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:42:35.350 23:05:38 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:42:35.350 23:05:38 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:42:35.350 23:05:38 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:42:35.350 23:05:38 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:35.350 23:05:38 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:35.350 23:05:38 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:35.350 23:05:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:35.350 23:05:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:37.888 23:05:40 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:37.888 00:42:37.888 real 0m38.269s 00:42:37.888 user 1m3.009s 00:42:37.888 sys 0m9.117s 00:42:37.888 23:05:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:37.888 23:05:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:37.888 ************************************ 00:42:37.888 END TEST nvmf_abort_qd_sizes 00:42:37.888 ************************************ 00:42:37.888 23:05:40 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:37.888 23:05:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:37.888 23:05:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:42:37.888 23:05:40 -- common/autotest_common.sh@10 -- # set +x 00:42:37.888 ************************************ 00:42:37.888 START TEST keyring_file 00:42:37.888 ************************************ 00:42:37.888 23:05:40 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:37.888 * Looking for test storage... 00:42:37.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:37.888 23:05:40 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:37.888 23:05:40 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:42:37.888 23:05:40 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:37.888 23:05:40 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:37.888 23:05:40 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:37.888 23:05:40 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:37.888 23:05:40 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:37.888 23:05:40 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:42:37.888 23:05:40 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:42:37.888 23:05:40 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:42:37.889 23:05:40 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:42:37.889 23:05:40 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:42:37.889 23:05:40 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:42:37.889 23:05:40 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:42:37.889 23:05:40 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:37.889 23:05:40 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:42:37.889 23:05:40 keyring_file -- scripts/common.sh@345 -- # : 1 00:42:37.889 23:05:40 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:37.889 23:05:40 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:37.889 23:05:40 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:42:37.889 23:05:40 keyring_file -- scripts/common.sh@353 -- # local d=1 00:42:37.889 23:05:40 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:37.889 23:05:40 keyring_file -- scripts/common.sh@355 -- # echo 1 00:42:37.889 23:05:40 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:42:37.889 23:05:40 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:42:37.889 23:05:40 keyring_file -- scripts/common.sh@353 -- # local d=2 00:42:37.889 23:05:40 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:37.889 23:05:40 keyring_file -- scripts/common.sh@355 -- # echo 2 00:42:37.889 23:05:40 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:42:37.889 23:05:40 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:37.889 23:05:40 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:37.889 23:05:40 keyring_file -- scripts/common.sh@368 -- # return 0 00:42:37.889 23:05:40 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:37.889 23:05:40 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:37.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:37.889 --rc genhtml_branch_coverage=1 00:42:37.889 --rc genhtml_function_coverage=1 00:42:37.889 --rc genhtml_legend=1 00:42:37.889 --rc geninfo_all_blocks=1 00:42:37.889 --rc geninfo_unexecuted_blocks=1 00:42:37.889 00:42:37.889 ' 00:42:37.889 23:05:40 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:37.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:37.889 --rc genhtml_branch_coverage=1 00:42:37.889 --rc genhtml_function_coverage=1 00:42:37.889 --rc genhtml_legend=1 00:42:37.889 --rc geninfo_all_blocks=1 00:42:37.889 --rc 
geninfo_unexecuted_blocks=1 00:42:37.889 00:42:37.889 ' 00:42:37.889 23:05:40 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:37.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:37.889 --rc genhtml_branch_coverage=1 00:42:37.889 --rc genhtml_function_coverage=1 00:42:37.889 --rc genhtml_legend=1 00:42:37.889 --rc geninfo_all_blocks=1 00:42:37.889 --rc geninfo_unexecuted_blocks=1 00:42:37.889 00:42:37.889 ' 00:42:37.889 23:05:40 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:37.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:37.889 --rc genhtml_branch_coverage=1 00:42:37.889 --rc genhtml_function_coverage=1 00:42:37.889 --rc genhtml_legend=1 00:42:37.889 --rc geninfo_all_blocks=1 00:42:37.889 --rc geninfo_unexecuted_blocks=1 00:42:37.889 00:42:37.889 ' 00:42:37.889 23:05:40 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:37.889 23:05:40 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:37.889 23:05:40 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:37.889 23:05:40 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:42:37.889 23:05:40 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:37.889 23:05:40 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:37.889 23:05:40 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:37.889 23:05:40 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:37.889 23:05:40 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:37.889 23:05:40 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:37.889 23:05:40 keyring_file -- paths/export.sh@5 -- # export PATH 00:42:37.889 23:05:40 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@51 -- # : 0 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:42:37.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:37.889 23:05:40 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:37.889 23:05:40 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:37.889 23:05:40 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:37.889 23:05:40 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:42:37.889 23:05:40 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:42:37.889 23:05:40 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:42:37.889 23:05:40 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:37.889 23:05:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:37.889 23:05:40 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:37.889 23:05:40 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:37.889 23:05:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:37.889 23:05:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:37.889 23:05:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.2DQSvaRgjY 00:42:37.889 23:05:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@730 
-- # key=00112233445566778899aabbccddeeff 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:42:37.889 23:05:40 keyring_file -- nvmf/common.sh@731 -- # python - 00:42:37.889 23:05:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.2DQSvaRgjY 00:42:37.889 23:05:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.2DQSvaRgjY 00:42:37.890 23:05:40 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.2DQSvaRgjY 00:42:37.890 23:05:40 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:42:37.890 23:05:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:37.890 23:05:40 keyring_file -- keyring/common.sh@17 -- # name=key1 00:42:37.890 23:05:40 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:37.890 23:05:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:37.890 23:05:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:37.890 23:05:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.yuBYAeRgUb 00:42:37.890 23:05:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:37.890 23:05:40 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:37.890 23:05:40 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:42:37.890 23:05:40 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:42:37.890 23:05:40 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:42:37.890 23:05:40 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:42:37.890 23:05:40 keyring_file -- nvmf/common.sh@731 -- # python - 00:42:37.890 23:05:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.yuBYAeRgUb 00:42:37.890 23:05:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.yuBYAeRgUb 00:42:37.890 23:05:40 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.yuBYAeRgUb 
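The `format_interchange_psk` calls traced above hand the raw hex key to an inline `python -` step that wraps it in the NVMe/TCP configured-PSK interchange string (the `NVMeTLSkey-1` prefix set just before it). A minimal sketch of that transformation, assuming the spec's layout of prefix, two-hex-digit hash identifier, and base64 of the key bytes with a little-endian CRC-32 appended; the function name and exact encoding here are inferred from the traced variables, not copied from the script:

```python
import base64
import zlib

def format_interchange_psk(key_hex: str, digest: int) -> str:
    # Configured-PSK interchange layout: "NVMeTLSkey-1:<hh>:<base64(key || crc32(key))>:"
    # digest 0 selects the "00" (no-hash) identifier, matching "digest=0" in the trace.
    key = bytes.fromhex(key_hex)
    crc = zlib.crc32(key).to_bytes(4, "little")  # CRC-32 of the key bytes, little-endian
    body = base64.b64encode(key + crc).decode()
    return "NVMeTLSkey-1:%02x:%s:" % (digest, body)

# The same 16-byte key0 the test feeds in:
psk = format_interchange_psk("00112233445566778899aabbccddeeff", 0)
print(psk)  # e.g. NVMeTLSkey-1:00:<28 base64 chars>:
```

The result is what gets written to the `mktemp` path and `chmod 0600`-ed before registration.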
00:42:37.890 23:05:40 keyring_file -- keyring/file.sh@30 -- # tgtpid=478437 00:42:37.890 23:05:40 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:37.890 23:05:40 keyring_file -- keyring/file.sh@32 -- # waitforlisten 478437 00:42:37.890 23:05:40 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 478437 ']' 00:42:37.890 23:05:40 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:37.890 23:05:40 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:37.890 23:05:40 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:37.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:37.890 23:05:40 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:37.890 23:05:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:37.890 [2024-10-11 23:05:40.994024] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:42:37.890 [2024-10-11 23:05:40.994111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478437 ] 00:42:37.890 [2024-10-11 23:05:41.051700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:37.890 [2024-10-11 23:05:41.095753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:38.148 23:05:41 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:38.148 23:05:41 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:42:38.148 23:05:41 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:42:38.148 23:05:41 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:38.148 23:05:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:38.148 [2024-10-11 23:05:41.351562] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:38.148 null0 00:42:38.148 [2024-10-11 23:05:41.383608] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:38.148 [2024-10-11 23:05:41.384111] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:38.148 23:05:41 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:38.148 23:05:41 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:38.148 23:05:41 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:38.148 23:05:41 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:38.148 23:05:41 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:42:38.148 23:05:41 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:42:38.148 23:05:41 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:42:38.148 23:05:41 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:38.148 23:05:41 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:38.148 23:05:41 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:38.149 23:05:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:38.149 [2024-10-11 23:05:41.407636] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:42:38.149 request: 00:42:38.149 { 00:42:38.149 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:42:38.149 "secure_channel": false, 00:42:38.149 "listen_address": { 00:42:38.149 "trtype": "tcp", 00:42:38.149 "traddr": "127.0.0.1", 00:42:38.149 "trsvcid": "4420" 00:42:38.149 }, 00:42:38.149 "method": "nvmf_subsystem_add_listener", 00:42:38.149 "req_id": 1 00:42:38.149 } 00:42:38.149 Got JSON-RPC error response 00:42:38.149 response: 00:42:38.149 { 00:42:38.149 "code": -32602, 00:42:38.149 "message": "Invalid parameters" 00:42:38.149 } 00:42:38.149 23:05:41 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:42:38.149 23:05:41 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:38.149 23:05:41 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:38.149 23:05:41 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:38.149 23:05:41 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:38.149 23:05:41 keyring_file -- keyring/file.sh@47 -- # bperfpid=478444 00:42:38.149 23:05:41 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:42:38.149 23:05:41 keyring_file -- keyring/file.sh@49 -- # waitforlisten 478444 /var/tmp/bperf.sock 00:42:38.149 23:05:41 
keyring_file -- common/autotest_common.sh@831 -- # '[' -z 478444 ']' 00:42:38.149 23:05:41 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:38.149 23:05:41 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:38.149 23:05:41 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:38.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:38.149 23:05:41 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:38.149 23:05:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:38.407 [2024-10-11 23:05:41.455620] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 00:42:38.407 [2024-10-11 23:05:41.455698] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478444 ] 00:42:38.407 [2024-10-11 23:05:41.512666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:38.407 [2024-10-11 23:05:41.558160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:38.665 23:05:41 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:38.665 23:05:41 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:42:38.665 23:05:41 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2DQSvaRgjY 00:42:38.665 23:05:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2DQSvaRgjY 00:42:38.922 23:05:41 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.yuBYAeRgUb 00:42:38.922 23:05:41 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.yuBYAeRgUb 00:42:39.181 23:05:42 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:42:39.181 23:05:42 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:42:39.181 23:05:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:39.181 23:05:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:39.181 23:05:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:39.439 23:05:42 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.2DQSvaRgjY == \/\t\m\p\/\t\m\p\.\2\D\Q\S\v\a\R\g\j\Y ]] 00:42:39.439 23:05:42 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:42:39.439 23:05:42 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:42:39.439 23:05:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:39.439 23:05:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:39.439 23:05:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:39.697 23:05:42 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.yuBYAeRgUb == \/\t\m\p\/\t\m\p\.\y\u\B\Y\A\e\R\g\U\b ]] 00:42:39.697 23:05:42 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:42:39.697 23:05:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:39.697 23:05:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:39.697 23:05:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:39.697 23:05:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:39.697 23:05:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
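The `get_key`/`get_refcnt` helpers exercised above are thin `jq` filters over the `keyring_get_keys` RPC output (`select(.name == "keyN")`, then `.refcnt`). A rough Python equivalent of that filtering, using an illustrative payload — the key names and paths mirror the log, but the `refcnt` values below are sample data, not the RPC's actual response:

```python
import json

# Illustrative stand-in for `rpc.py -s /var/tmp/bperf.sock keyring_get_keys` output.
keys_json = json.dumps([
    {"name": "key0", "path": "/tmp/tmp.2DQSvaRgjY", "refcnt": 1},
    {"name": "key1", "path": "/tmp/tmp.yuBYAeRgUb", "refcnt": 1},
])

def get_key(name: str, payload: str) -> dict:
    # mirrors: keyring_get_keys | jq '.[] | select(.name == "keyN")'
    return next(k for k in json.loads(payload) if k["name"] == name)

def get_refcnt(name: str, payload: str) -> int:
    # mirrors: get_key keyN | jq -r .refcnt
    return get_key(name, payload)["refcnt"]

print(get_key("key0", keys_json)["path"])   # /tmp/tmp.2DQSvaRgjY
print(get_refcnt("key1", keys_json))        # 1
```

The `(( 1 == 1 ))` assertions that follow in the trace are exactly this refcount compared against the expected value.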
00:42:39.955 23:05:43 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:42:39.955 23:05:43 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:42:39.955 23:05:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:39.955 23:05:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:39.955 23:05:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:39.955 23:05:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:39.955 23:05:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:40.213 23:05:43 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:42:40.213 23:05:43 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:40.214 23:05:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:40.472 [2024-10-11 23:05:43.587771] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:40.472 nvme0n1 00:42:40.472 23:05:43 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:42:40.472 23:05:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:40.472 23:05:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:40.472 23:05:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:40.472 23:05:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:40.472 23:05:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:42:40.730 23:05:43 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:42:40.730 23:05:43 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:42:40.730 23:05:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:40.730 23:05:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:40.730 23:05:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:40.730 23:05:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:40.730 23:05:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:40.988 23:05:44 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:42:40.988 23:05:44 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:41.247 Running I/O for 1 seconds... 00:42:42.183 10193.00 IOPS, 39.82 MiB/s 00:42:42.183 Latency(us) 00:42:42.183 [2024-10-11T21:05:45.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:42.183 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:42:42.183 nvme0n1 : 1.01 10244.56 40.02 0.00 0.00 12455.95 5437.06 21068.61 00:42:42.183 [2024-10-11T21:05:45.451Z] =================================================================================================================== 00:42:42.183 [2024-10-11T21:05:45.451Z] Total : 10244.56 40.02 0.00 0.00 12455.95 5437.06 21068.61 00:42:42.183 { 00:42:42.183 "results": [ 00:42:42.183 { 00:42:42.183 "job": "nvme0n1", 00:42:42.183 "core_mask": "0x2", 00:42:42.183 "workload": "randrw", 00:42:42.183 "percentage": 50, 00:42:42.183 "status": "finished", 00:42:42.183 "queue_depth": 128, 00:42:42.183 "io_size": 4096, 00:42:42.183 "runtime": 1.007559, 00:42:42.183 "iops": 10244.561360674661, 00:42:42.183 "mibps": 40.017817815135395, 
00:42:42.183 "io_failed": 0, 00:42:42.183 "io_timeout": 0, 00:42:42.183 "avg_latency_us": 12455.946964053766, 00:42:42.183 "min_latency_us": 5437.060740740741, 00:42:42.183 "max_latency_us": 21068.61037037037 00:42:42.183 } 00:42:42.183 ], 00:42:42.183 "core_count": 1 00:42:42.183 } 00:42:42.183 23:05:45 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:42.183 23:05:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:42.442 23:05:45 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:42:42.442 23:05:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:42.442 23:05:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:42.442 23:05:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:42.442 23:05:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:42.442 23:05:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:42.719 23:05:45 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:42:42.719 23:05:45 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:42:42.719 23:05:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:42.719 23:05:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:42.719 23:05:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:42.719 23:05:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:42.719 23:05:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:42.981 23:05:46 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:42:42.981 23:05:46 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:42.981 23:05:46 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:42.981 23:05:46 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:42.982 23:05:46 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:42.982 23:05:46 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:42.982 23:05:46 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:42.982 23:05:46 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:42.982 23:05:46 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:42.982 23:05:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:43.240 [2024-10-11 23:05:46.461279] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:43.240 [2024-10-11 23:05:46.461805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c3f60 (107): Transport endpoint is not connected 00:42:43.240 [2024-10-11 23:05:46.462795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c3f60 (9): Bad file descriptor 00:42:43.240 [2024-10-11 23:05:46.463794] 
nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:43.240 [2024-10-11 23:05:46.463825] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:43.240 [2024-10-11 23:05:46.463840] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:43.240 [2024-10-11 23:05:46.463855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:43.240 request: 00:42:43.240 { 00:42:43.240 "name": "nvme0", 00:42:43.240 "trtype": "tcp", 00:42:43.240 "traddr": "127.0.0.1", 00:42:43.240 "adrfam": "ipv4", 00:42:43.240 "trsvcid": "4420", 00:42:43.240 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:43.240 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:43.240 "prchk_reftag": false, 00:42:43.240 "prchk_guard": false, 00:42:43.240 "hdgst": false, 00:42:43.240 "ddgst": false, 00:42:43.240 "psk": "key1", 00:42:43.240 "allow_unrecognized_csi": false, 00:42:43.240 "method": "bdev_nvme_attach_controller", 00:42:43.240 "req_id": 1 00:42:43.240 } 00:42:43.240 Got JSON-RPC error response 00:42:43.240 response: 00:42:43.240 { 00:42:43.240 "code": -5, 00:42:43.240 "message": "Input/output error" 00:42:43.240 } 00:42:43.240 23:05:46 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:43.240 23:05:46 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:43.240 23:05:46 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:43.240 23:05:46 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:43.240 23:05:46 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:42:43.240 23:05:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:43.240 23:05:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:43.240 23:05:46 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:42:43.240 23:05:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:43.240 23:05:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:43.498 23:05:46 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:42:43.498 23:05:46 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:42:43.498 23:05:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:43.498 23:05:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:43.498 23:05:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:43.498 23:05:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:43.498 23:05:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:43.756 23:05:47 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:42:43.756 23:05:47 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:42:44.014 23:05:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:44.272 23:05:47 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:42:44.272 23:05:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:42:44.530 23:05:47 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:42:44.530 23:05:47 keyring_file -- keyring/file.sh@78 -- # jq length 00:42:44.530 23:05:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:44.794 23:05:47 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:42:44.794 23:05:47 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.2DQSvaRgjY 00:42:44.794 23:05:47 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.2DQSvaRgjY 00:42:44.794 23:05:47 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:44.794 23:05:47 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.2DQSvaRgjY 00:42:44.794 23:05:47 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:44.794 23:05:47 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:44.794 23:05:47 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:44.794 23:05:47 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:44.794 23:05:47 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2DQSvaRgjY 00:42:44.794 23:05:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2DQSvaRgjY 00:42:45.103 [2024-10-11 23:05:48.093931] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.2DQSvaRgjY': 0100660 00:42:45.103 [2024-10-11 23:05:48.093963] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:42:45.103 request: 00:42:45.103 { 00:42:45.103 "name": "key0", 00:42:45.103 "path": "/tmp/tmp.2DQSvaRgjY", 00:42:45.103 "method": "keyring_file_add_key", 00:42:45.103 "req_id": 1 00:42:45.103 } 00:42:45.103 Got JSON-RPC error response 00:42:45.103 response: 00:42:45.103 { 00:42:45.103 "code": -1, 00:42:45.103 "message": "Operation not permitted" 00:42:45.103 } 00:42:45.103 23:05:48 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:45.103 23:05:48 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:45.103 23:05:48 
keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:45.103 23:05:48 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:45.103 23:05:48 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.2DQSvaRgjY 00:42:45.103 23:05:48 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2DQSvaRgjY 00:42:45.103 23:05:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2DQSvaRgjY 00:42:45.432 23:05:48 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.2DQSvaRgjY 00:42:45.432 23:05:48 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:42:45.432 23:05:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:45.432 23:05:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:45.432 23:05:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:45.432 23:05:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:45.432 23:05:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:45.432 23:05:48 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:42:45.432 23:05:48 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:45.432 23:05:48 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:45.432 23:05:48 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:45.432 23:05:48 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:45.432 23:05:48 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:45.432 23:05:48 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:45.432 23:05:48 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:45.432 23:05:48 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:45.432 23:05:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:45.730 [2024-10-11 23:05:48.920181] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.2DQSvaRgjY': No such file or directory 00:42:45.730 [2024-10-11 23:05:48.920213] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:42:45.730 [2024-10-11 23:05:48.920247] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:42:45.730 [2024-10-11 23:05:48.920259] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:42:45.730 [2024-10-11 23:05:48.920272] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:42:45.730 [2024-10-11 23:05:48.920283] bdev_nvme.c:6438:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:42:45.730 request: 00:42:45.730 { 00:42:45.730 "name": "nvme0", 00:42:45.730 "trtype": "tcp", 00:42:45.730 "traddr": "127.0.0.1", 00:42:45.730 "adrfam": "ipv4", 00:42:45.730 "trsvcid": "4420", 00:42:45.730 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:45.730 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:42:45.730 "prchk_reftag": false, 00:42:45.730 "prchk_guard": false, 00:42:45.730 "hdgst": false, 00:42:45.730 "ddgst": false, 00:42:45.730 "psk": "key0", 00:42:45.730 "allow_unrecognized_csi": false, 00:42:45.730 "method": "bdev_nvme_attach_controller", 00:42:45.730 "req_id": 1 00:42:45.730 } 00:42:45.730 Got JSON-RPC error response 00:42:45.730 response: 00:42:45.730 { 00:42:45.731 "code": -19, 00:42:45.731 "message": "No such device" 00:42:45.731 } 00:42:45.731 23:05:48 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:45.731 23:05:48 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:45.731 23:05:48 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:45.731 23:05:48 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:45.731 23:05:48 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:42:45.731 23:05:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:45.989 23:05:49 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:45.989 23:05:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:45.989 23:05:49 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:45.989 23:05:49 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:45.989 23:05:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:45.989 23:05:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:45.989 23:05:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.YxhlUke7jC 00:42:45.989 23:05:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:45.989 23:05:49 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:45.989 23:05:49 keyring_file -- 
nvmf/common.sh@728 -- # local prefix key digest 00:42:45.989 23:05:49 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:42:45.989 23:05:49 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:42:45.989 23:05:49 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:42:45.989 23:05:49 keyring_file -- nvmf/common.sh@731 -- # python - 00:42:45.989 23:05:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.YxhlUke7jC 00:42:45.989 23:05:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.YxhlUke7jC 00:42:45.989 23:05:49 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.YxhlUke7jC 00:42:45.989 23:05:49 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YxhlUke7jC 00:42:45.989 23:05:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YxhlUke7jC 00:42:46.555 23:05:49 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:46.555 23:05:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:46.814 nvme0n1 00:42:46.814 23:05:49 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:42:46.814 23:05:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:46.814 23:05:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:46.814 23:05:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:46.814 23:05:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:46.814 
23:05:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:47.072 23:05:50 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:42:47.072 23:05:50 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:42:47.072 23:05:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:47.330 23:05:50 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:42:47.330 23:05:50 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:42:47.330 23:05:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:47.330 23:05:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:47.330 23:05:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:47.587 23:05:50 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:42:47.587 23:05:50 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:42:47.587 23:05:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:47.587 23:05:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:47.587 23:05:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:47.587 23:05:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:47.587 23:05:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:47.845 23:05:51 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:42:47.845 23:05:51 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:47.845 23:05:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
nvme0 00:42:48.103 23:05:51 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:42:48.103 23:05:51 keyring_file -- keyring/file.sh@105 -- # jq length 00:42:48.103 23:05:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:48.362 23:05:51 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:42:48.362 23:05:51 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YxhlUke7jC 00:42:48.362 23:05:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YxhlUke7jC 00:42:48.620 23:05:51 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.yuBYAeRgUb 00:42:48.620 23:05:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.yuBYAeRgUb 00:42:48.878 23:05:52 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:48.878 23:05:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:49.444 nvme0n1 00:42:49.444 23:05:52 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:42:49.444 23:05:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:42:49.704 23:05:52 keyring_file -- keyring/file.sh@113 -- # config='{ 00:42:49.704 "subsystems": [ 00:42:49.704 { 00:42:49.704 "subsystem": "keyring", 00:42:49.704 
"config": [ 00:42:49.704 { 00:42:49.704 "method": "keyring_file_add_key", 00:42:49.704 "params": { 00:42:49.704 "name": "key0", 00:42:49.704 "path": "/tmp/tmp.YxhlUke7jC" 00:42:49.704 } 00:42:49.704 }, 00:42:49.704 { 00:42:49.704 "method": "keyring_file_add_key", 00:42:49.704 "params": { 00:42:49.704 "name": "key1", 00:42:49.704 "path": "/tmp/tmp.yuBYAeRgUb" 00:42:49.704 } 00:42:49.704 } 00:42:49.704 ] 00:42:49.704 }, 00:42:49.704 { 00:42:49.704 "subsystem": "iobuf", 00:42:49.704 "config": [ 00:42:49.704 { 00:42:49.704 "method": "iobuf_set_options", 00:42:49.704 "params": { 00:42:49.704 "small_pool_count": 8192, 00:42:49.704 "large_pool_count": 1024, 00:42:49.704 "small_bufsize": 8192, 00:42:49.704 "large_bufsize": 135168 00:42:49.704 } 00:42:49.704 } 00:42:49.704 ] 00:42:49.704 }, 00:42:49.704 { 00:42:49.704 "subsystem": "sock", 00:42:49.704 "config": [ 00:42:49.704 { 00:42:49.704 "method": "sock_set_default_impl", 00:42:49.704 "params": { 00:42:49.704 "impl_name": "posix" 00:42:49.704 } 00:42:49.704 }, 00:42:49.704 { 00:42:49.704 "method": "sock_impl_set_options", 00:42:49.704 "params": { 00:42:49.704 "impl_name": "ssl", 00:42:49.704 "recv_buf_size": 4096, 00:42:49.704 "send_buf_size": 4096, 00:42:49.704 "enable_recv_pipe": true, 00:42:49.704 "enable_quickack": false, 00:42:49.704 "enable_placement_id": 0, 00:42:49.704 "enable_zerocopy_send_server": true, 00:42:49.704 "enable_zerocopy_send_client": false, 00:42:49.704 "zerocopy_threshold": 0, 00:42:49.704 "tls_version": 0, 00:42:49.704 "enable_ktls": false 00:42:49.704 } 00:42:49.704 }, 00:42:49.704 { 00:42:49.704 "method": "sock_impl_set_options", 00:42:49.704 "params": { 00:42:49.704 "impl_name": "posix", 00:42:49.704 "recv_buf_size": 2097152, 00:42:49.704 "send_buf_size": 2097152, 00:42:49.704 "enable_recv_pipe": true, 00:42:49.704 "enable_quickack": false, 00:42:49.704 "enable_placement_id": 0, 00:42:49.704 "enable_zerocopy_send_server": true, 00:42:49.704 "enable_zerocopy_send_client": false, 00:42:49.704 
"zerocopy_threshold": 0, 00:42:49.704 "tls_version": 0, 00:42:49.704 "enable_ktls": false 00:42:49.704 } 00:42:49.704 } 00:42:49.704 ] 00:42:49.704 }, 00:42:49.704 { 00:42:49.704 "subsystem": "vmd", 00:42:49.704 "config": [] 00:42:49.704 }, 00:42:49.704 { 00:42:49.704 "subsystem": "accel", 00:42:49.704 "config": [ 00:42:49.704 { 00:42:49.704 "method": "accel_set_options", 00:42:49.704 "params": { 00:42:49.704 "small_cache_size": 128, 00:42:49.704 "large_cache_size": 16, 00:42:49.704 "task_count": 2048, 00:42:49.704 "sequence_count": 2048, 00:42:49.704 "buf_count": 2048 00:42:49.704 } 00:42:49.704 } 00:42:49.704 ] 00:42:49.704 }, 00:42:49.704 { 00:42:49.704 "subsystem": "bdev", 00:42:49.704 "config": [ 00:42:49.704 { 00:42:49.704 "method": "bdev_set_options", 00:42:49.704 "params": { 00:42:49.704 "bdev_io_pool_size": 65535, 00:42:49.704 "bdev_io_cache_size": 256, 00:42:49.704 "bdev_auto_examine": true, 00:42:49.704 "iobuf_small_cache_size": 128, 00:42:49.704 "iobuf_large_cache_size": 16 00:42:49.704 } 00:42:49.704 }, 00:42:49.704 { 00:42:49.704 "method": "bdev_raid_set_options", 00:42:49.704 "params": { 00:42:49.704 "process_window_size_kb": 1024, 00:42:49.704 "process_max_bandwidth_mb_sec": 0 00:42:49.704 } 00:42:49.704 }, 00:42:49.704 { 00:42:49.704 "method": "bdev_iscsi_set_options", 00:42:49.704 "params": { 00:42:49.704 "timeout_sec": 30 00:42:49.704 } 00:42:49.704 }, 00:42:49.704 { 00:42:49.704 "method": "bdev_nvme_set_options", 00:42:49.704 "params": { 00:42:49.705 "action_on_timeout": "none", 00:42:49.705 "timeout_us": 0, 00:42:49.705 "timeout_admin_us": 0, 00:42:49.705 "keep_alive_timeout_ms": 10000, 00:42:49.705 "arbitration_burst": 0, 00:42:49.705 "low_priority_weight": 0, 00:42:49.705 "medium_priority_weight": 0, 00:42:49.705 "high_priority_weight": 0, 00:42:49.705 "nvme_adminq_poll_period_us": 10000, 00:42:49.705 "nvme_ioq_poll_period_us": 0, 00:42:49.705 "io_queue_requests": 512, 00:42:49.705 "delay_cmd_submit": true, 00:42:49.705 
"transport_retry_count": 4, 00:42:49.705 "bdev_retry_count": 3, 00:42:49.705 "transport_ack_timeout": 0, 00:42:49.705 "ctrlr_loss_timeout_sec": 0, 00:42:49.705 "reconnect_delay_sec": 0, 00:42:49.705 "fast_io_fail_timeout_sec": 0, 00:42:49.705 "disable_auto_failback": false, 00:42:49.705 "generate_uuids": false, 00:42:49.705 "transport_tos": 0, 00:42:49.705 "nvme_error_stat": false, 00:42:49.705 "rdma_srq_size": 0, 00:42:49.705 "io_path_stat": false, 00:42:49.705 "allow_accel_sequence": false, 00:42:49.705 "rdma_max_cq_size": 0, 00:42:49.705 "rdma_cm_event_timeout_ms": 0, 00:42:49.705 "dhchap_digests": [ 00:42:49.705 "sha256", 00:42:49.705 "sha384", 00:42:49.705 "sha512" 00:42:49.705 ], 00:42:49.705 "dhchap_dhgroups": [ 00:42:49.705 "null", 00:42:49.705 "ffdhe2048", 00:42:49.705 "ffdhe3072", 00:42:49.705 "ffdhe4096", 00:42:49.705 "ffdhe6144", 00:42:49.705 "ffdhe8192" 00:42:49.705 ] 00:42:49.705 } 00:42:49.705 }, 00:42:49.705 { 00:42:49.705 "method": "bdev_nvme_attach_controller", 00:42:49.705 "params": { 00:42:49.705 "name": "nvme0", 00:42:49.705 "trtype": "TCP", 00:42:49.705 "adrfam": "IPv4", 00:42:49.705 "traddr": "127.0.0.1", 00:42:49.705 "trsvcid": "4420", 00:42:49.705 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:49.705 "prchk_reftag": false, 00:42:49.705 "prchk_guard": false, 00:42:49.705 "ctrlr_loss_timeout_sec": 0, 00:42:49.705 "reconnect_delay_sec": 0, 00:42:49.705 "fast_io_fail_timeout_sec": 0, 00:42:49.705 "psk": "key0", 00:42:49.705 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:49.705 "hdgst": false, 00:42:49.705 "ddgst": false, 00:42:49.705 "multipath": "multipath" 00:42:49.705 } 00:42:49.705 }, 00:42:49.705 { 00:42:49.705 "method": "bdev_nvme_set_hotplug", 00:42:49.705 "params": { 00:42:49.705 "period_us": 100000, 00:42:49.705 "enable": false 00:42:49.705 } 00:42:49.705 }, 00:42:49.705 { 00:42:49.705 "method": "bdev_wait_for_examine" 00:42:49.705 } 00:42:49.705 ] 00:42:49.705 }, 00:42:49.705 { 00:42:49.705 "subsystem": "nbd", 00:42:49.705 "config": [] 
00:42:49.705 } 00:42:49.705 ] 00:42:49.705 }' 00:42:49.705 23:05:52 keyring_file -- keyring/file.sh@115 -- # killprocess 478444 00:42:49.705 23:05:52 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 478444 ']' 00:42:49.705 23:05:52 keyring_file -- common/autotest_common.sh@954 -- # kill -0 478444 00:42:49.705 23:05:52 keyring_file -- common/autotest_common.sh@955 -- # uname 00:42:49.705 23:05:52 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:49.705 23:05:52 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 478444 00:42:49.705 23:05:52 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:49.705 23:05:52 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:49.705 23:05:52 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 478444' 00:42:49.705 killing process with pid 478444 00:42:49.705 23:05:52 keyring_file -- common/autotest_common.sh@969 -- # kill 478444 00:42:49.705 Received shutdown signal, test time was about 1.000000 seconds 00:42:49.705 00:42:49.705 Latency(us) 00:42:49.705 [2024-10-11T21:05:52.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:49.705 [2024-10-11T21:05:52.973Z] =================================================================================================================== 00:42:49.705 [2024-10-11T21:05:52.973Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:49.705 23:05:52 keyring_file -- common/autotest_common.sh@974 -- # wait 478444 00:42:49.964 23:05:52 keyring_file -- keyring/file.sh@118 -- # bperfpid=479909 00:42:49.964 23:05:52 keyring_file -- keyring/file.sh@120 -- # waitforlisten 479909 /var/tmp/bperf.sock 00:42:49.964 23:05:52 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 479909 ']' 00:42:49.964 23:05:52 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:49.964 23:05:52 keyring_file -- 
keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:42:49.964 23:05:52 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:49.964 23:05:52 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:49.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:49.964 23:05:52 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:42:49.964 "subsystems": [ 00:42:49.964 { 00:42:49.964 "subsystem": "keyring", 00:42:49.964 "config": [ 00:42:49.964 { 00:42:49.964 "method": "keyring_file_add_key", 00:42:49.964 "params": { 00:42:49.964 "name": "key0", 00:42:49.964 "path": "/tmp/tmp.YxhlUke7jC" 00:42:49.964 } 00:42:49.964 }, 00:42:49.964 { 00:42:49.964 "method": "keyring_file_add_key", 00:42:49.964 "params": { 00:42:49.964 "name": "key1", 00:42:49.964 "path": "/tmp/tmp.yuBYAeRgUb" 00:42:49.964 } 00:42:49.964 } 00:42:49.964 ] 00:42:49.964 }, 00:42:49.964 { 00:42:49.964 "subsystem": "iobuf", 00:42:49.964 "config": [ 00:42:49.964 { 00:42:49.964 "method": "iobuf_set_options", 00:42:49.964 "params": { 00:42:49.964 "small_pool_count": 8192, 00:42:49.964 "large_pool_count": 1024, 00:42:49.964 "small_bufsize": 8192, 00:42:49.964 "large_bufsize": 135168 00:42:49.964 } 00:42:49.964 } 00:42:49.964 ] 00:42:49.964 }, 00:42:49.964 { 00:42:49.964 "subsystem": "sock", 00:42:49.964 "config": [ 00:42:49.964 { 00:42:49.964 "method": "sock_set_default_impl", 00:42:49.964 "params": { 00:42:49.964 "impl_name": "posix" 00:42:49.964 } 00:42:49.964 }, 00:42:49.964 { 00:42:49.964 "method": "sock_impl_set_options", 00:42:49.964 "params": { 00:42:49.964 "impl_name": "ssl", 00:42:49.964 "recv_buf_size": 4096, 00:42:49.964 "send_buf_size": 4096, 00:42:49.964 "enable_recv_pipe": true, 00:42:49.964 "enable_quickack": false, 
00:42:49.964 "enable_placement_id": 0, 00:42:49.964 "enable_zerocopy_send_server": true, 00:42:49.964 "enable_zerocopy_send_client": false, 00:42:49.964 "zerocopy_threshold": 0, 00:42:49.964 "tls_version": 0, 00:42:49.964 "enable_ktls": false 00:42:49.964 } 00:42:49.964 }, 00:42:49.965 { 00:42:49.965 "method": "sock_impl_set_options", 00:42:49.965 "params": { 00:42:49.965 "impl_name": "posix", 00:42:49.965 "recv_buf_size": 2097152, 00:42:49.965 "send_buf_size": 2097152, 00:42:49.965 "enable_recv_pipe": true, 00:42:49.965 "enable_quickack": false, 00:42:49.965 "enable_placement_id": 0, 00:42:49.965 "enable_zerocopy_send_server": true, 00:42:49.965 "enable_zerocopy_send_client": false, 00:42:49.965 "zerocopy_threshold": 0, 00:42:49.965 "tls_version": 0, 00:42:49.965 "enable_ktls": false 00:42:49.965 } 00:42:49.965 } 00:42:49.965 ] 00:42:49.965 }, 00:42:49.965 { 00:42:49.965 "subsystem": "vmd", 00:42:49.965 "config": [] 00:42:49.965 }, 00:42:49.965 { 00:42:49.965 "subsystem": "accel", 00:42:49.965 "config": [ 00:42:49.965 { 00:42:49.965 "method": "accel_set_options", 00:42:49.965 "params": { 00:42:49.965 "small_cache_size": 128, 00:42:49.965 "large_cache_size": 16, 00:42:49.965 "task_count": 2048, 00:42:49.965 "sequence_count": 2048, 00:42:49.965 "buf_count": 2048 00:42:49.965 } 00:42:49.965 } 00:42:49.965 ] 00:42:49.965 }, 00:42:49.965 { 00:42:49.965 "subsystem": "bdev", 00:42:49.965 "config": [ 00:42:49.965 { 00:42:49.965 "method": "bdev_set_options", 00:42:49.965 "params": { 00:42:49.965 "bdev_io_pool_size": 65535, 00:42:49.965 "bdev_io_cache_size": 256, 00:42:49.965 "bdev_auto_examine": true, 00:42:49.965 "iobuf_small_cache_size": 128, 00:42:49.965 "iobuf_large_cache_size": 16 00:42:49.965 } 00:42:49.965 }, 00:42:49.965 { 00:42:49.965 "method": "bdev_raid_set_options", 00:42:49.965 "params": { 00:42:49.965 "process_window_size_kb": 1024, 00:42:49.965 "process_max_bandwidth_mb_sec": 0 00:42:49.965 } 00:42:49.965 }, 00:42:49.965 { 00:42:49.965 "method": 
"bdev_iscsi_set_options", 00:42:49.965 "params": { 00:42:49.965 "timeout_sec": 30 00:42:49.965 } 00:42:49.965 }, 00:42:49.965 { 00:42:49.965 "method": "bdev_nvme_set_options", 00:42:49.965 "params": { 00:42:49.965 "action_on_timeout": "none", 00:42:49.965 "timeout_us": 0, 00:42:49.965 "timeout_admin_us": 0, 00:42:49.965 "keep_alive_timeout_ms": 10000, 00:42:49.965 "arbitration_burst": 0, 00:42:49.965 "low_priority_weight": 0, 00:42:49.965 "medium_priority_weight": 0, 00:42:49.965 "high_priority_weight": 0, 00:42:49.965 "nvme_adminq_poll_period_us": 10000, 00:42:49.965 "nvme_ioq_poll_period_us": 0, 00:42:49.965 "io_queue_requests": 512, 00:42:49.965 "delay_cmd_submit": true, 00:42:49.965 "transport_retry_count": 4, 00:42:49.965 "bdev_retry_count": 3, 00:42:49.965 "transport_ack_timeout": 0, 00:42:49.965 "ctrlr_loss_timeout_sec": 0, 00:42:49.965 "reconnect_delay_sec": 0, 00:42:49.965 "fast_io_fail_timeout_sec": 0, 00:42:49.965 "disable_auto_failback": false, 00:42:49.965 "generate_uuids": false, 00:42:49.965 "transport_tos": 0, 00:42:49.965 "nvme_error_stat": false, 00:42:49.965 "rdma_srq_size": 0, 00:42:49.965 "io_path_stat": false, 00:42:49.965 "allow_accel_sequence": false, 00:42:49.965 "rdma_max_cq_size": 0, 00:42:49.965 "rdma_cm_event_timeout_ms": 0, 00:42:49.965 "dhchap_digests": [ 00:42:49.965 "sha256", 00:42:49.965 "sha384", 00:42:49.965 "sha512" 00:42:49.965 ], 00:42:49.965 "dhchap_dhgroups": [ 00:42:49.965 "null", 00:42:49.965 "ffdhe2048", 00:42:49.965 "ffdhe3072", 00:42:49.965 "ffdhe4096", 00:42:49.965 "ffdhe6144", 00:42:49.965 "ffdhe8192" 00:42:49.965 ] 00:42:49.965 } 00:42:49.965 }, 00:42:49.965 { 00:42:49.965 "method": "bdev_nvme_attach_controller", 00:42:49.965 "params": { 00:42:49.965 "name": "nvme0", 00:42:49.965 "trtype": "TCP", 00:42:49.965 "adrfam": "IPv4", 00:42:49.965 "traddr": "127.0.0.1", 00:42:49.965 "trsvcid": "4420", 00:42:49.965 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:49.965 "prchk_reftag": false, 00:42:49.965 "prchk_guard": false, 
00:42:49.965 "ctrlr_loss_timeout_sec": 0, 00:42:49.965 "reconnect_delay_sec": 0, 00:42:49.965 "fast_io_fail_timeout_sec": 0, 00:42:49.965 "psk": "key0", 00:42:49.965 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:49.965 "hdgst": false, 00:42:49.965 "ddgst": false, 00:42:49.965 "multipath": "multipath" 00:42:49.965 } 00:42:49.965 }, 00:42:49.965 { 00:42:49.965 "method": "bdev_nvme_set_hotplug", 00:42:49.965 "params": { 00:42:49.965 "period_us": 100000, 00:42:49.965 "enable": false 00:42:49.965 } 00:42:49.965 }, 00:42:49.965 { 00:42:49.965 "method": "bdev_wait_for_examine" 00:42:49.965 } 00:42:49.965 ] 00:42:49.965 }, 00:42:49.965 { 00:42:49.965 "subsystem": "nbd", 00:42:49.965 "config": [] 00:42:49.965 } 00:42:49.965 ] 00:42:49.965 }' 00:42:49.965 23:05:52 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:49.965 23:05:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:49.965 [2024-10-11 23:05:53.026184] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:42:49.965 [2024-10-11 23:05:53.026280] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479909 ] 00:42:49.965 [2024-10-11 23:05:53.089631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:49.965 [2024-10-11 23:05:53.139963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:50.223 [2024-10-11 23:05:53.321004] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:50.223 23:05:53 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:50.223 23:05:53 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:42:50.223 23:05:53 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:42:50.223 23:05:53 keyring_file -- keyring/file.sh@121 -- # jq length 00:42:50.223 23:05:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:50.481 23:05:53 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:42:50.481 23:05:53 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:42:50.481 23:05:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:50.481 23:05:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:50.481 23:05:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:50.481 23:05:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:50.481 23:05:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:50.740 23:05:53 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:42:50.740 23:05:53 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:42:50.740 23:05:53 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:50.740 23:05:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:50.740 23:05:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:50.740 23:05:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:50.740 23:05:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:51.307 23:05:54 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:42:51.307 23:05:54 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:42:51.307 23:05:54 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:42:51.307 23:05:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:42:51.307 23:05:54 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:42:51.307 23:05:54 keyring_file -- keyring/file.sh@1 -- # cleanup 00:42:51.307 23:05:54 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.YxhlUke7jC /tmp/tmp.yuBYAeRgUb 00:42:51.307 23:05:54 keyring_file -- keyring/file.sh@20 -- # killprocess 479909 00:42:51.307 23:05:54 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 479909 ']' 00:42:51.307 23:05:54 keyring_file -- common/autotest_common.sh@954 -- # kill -0 479909 00:42:51.307 23:05:54 keyring_file -- common/autotest_common.sh@955 -- # uname 00:42:51.307 23:05:54 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:51.307 23:05:54 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 479909 00:42:51.565 23:05:54 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:51.565 23:05:54 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:51.565 23:05:54 keyring_file -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 479909' 00:42:51.565 killing process with pid 479909 00:42:51.565 23:05:54 keyring_file -- common/autotest_common.sh@969 -- # kill 479909 00:42:51.565 Received shutdown signal, test time was about 1.000000 seconds 00:42:51.565 00:42:51.565 Latency(us) 00:42:51.565 [2024-10-11T21:05:54.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:51.565 [2024-10-11T21:05:54.833Z] =================================================================================================================== 00:42:51.565 [2024-10-11T21:05:54.833Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:51.565 23:05:54 keyring_file -- common/autotest_common.sh@974 -- # wait 479909 00:42:51.565 23:05:54 keyring_file -- keyring/file.sh@21 -- # killprocess 478437 00:42:51.565 23:05:54 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 478437 ']' 00:42:51.565 23:05:54 keyring_file -- common/autotest_common.sh@954 -- # kill -0 478437 00:42:51.565 23:05:54 keyring_file -- common/autotest_common.sh@955 -- # uname 00:42:51.565 23:05:54 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:51.565 23:05:54 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 478437 00:42:51.565 23:05:54 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:51.565 23:05:54 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:51.565 23:05:54 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 478437' 00:42:51.565 killing process with pid 478437 00:42:51.565 23:05:54 keyring_file -- common/autotest_common.sh@969 -- # kill 478437 00:42:51.565 23:05:54 keyring_file -- common/autotest_common.sh@974 -- # wait 478437 00:42:52.133 00:42:52.133 real 0m14.426s 00:42:52.133 user 0m36.976s 00:42:52.133 sys 0m3.202s 00:42:52.133 23:05:55 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:52.133 23:05:55 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:52.133 ************************************ 00:42:52.133 END TEST keyring_file 00:42:52.133 ************************************ 00:42:52.133 23:05:55 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:42:52.133 23:05:55 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:52.133 23:05:55 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:42:52.133 23:05:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:52.133 23:05:55 -- common/autotest_common.sh@10 -- # set +x 00:42:52.133 ************************************ 00:42:52.133 START TEST keyring_linux 00:42:52.133 ************************************ 00:42:52.133 23:05:55 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:52.133 Joined session keyring: 277327650 00:42:52.133 * Looking for test storage... 
00:42:52.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:52.133 23:05:55 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:52.133 23:05:55 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:42:52.133 23:05:55 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:52.133 23:05:55 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:52.133 23:05:55 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:52.133 23:05:55 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:52.133 23:05:55 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:52.133 23:05:55 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:42:52.133 23:05:55 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:42:52.133 23:05:55 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:42:52.133 23:05:55 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:42:52.133 23:05:55 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:42:52.133 23:05:55 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:42:52.133 23:05:55 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:42:52.133 23:05:55 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:52.133 23:05:55 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:42:52.133 23:05:55 keyring_linux -- scripts/common.sh@345 -- # : 1 00:42:52.133 23:05:55 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:52.133 23:05:55 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:52.133 23:05:55 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:42:52.133 23:05:55 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:42:52.133 23:05:55 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:52.133 23:05:55 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:42:52.133 23:05:55 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:42:52.133 23:05:55 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:42:52.133 23:05:55 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:42:52.133 23:05:55 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:52.133 23:05:55 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:42:52.133 23:05:55 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:42:52.133 23:05:55 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:52.133 23:05:55 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:52.133 23:05:55 keyring_linux -- scripts/common.sh@368 -- # return 0 00:42:52.133 23:05:55 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:52.133 23:05:55 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:52.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:52.133 --rc genhtml_branch_coverage=1 00:42:52.133 --rc genhtml_function_coverage=1 00:42:52.133 --rc genhtml_legend=1 00:42:52.133 --rc geninfo_all_blocks=1 00:42:52.133 --rc geninfo_unexecuted_blocks=1 00:42:52.133 00:42:52.133 ' 00:42:52.133 23:05:55 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:52.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:52.133 --rc genhtml_branch_coverage=1 00:42:52.133 --rc genhtml_function_coverage=1 00:42:52.133 --rc genhtml_legend=1 00:42:52.133 --rc geninfo_all_blocks=1 00:42:52.133 --rc geninfo_unexecuted_blocks=1 00:42:52.133 00:42:52.133 ' 
00:42:52.133 23:05:55 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:52.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:52.133 --rc genhtml_branch_coverage=1 00:42:52.133 --rc genhtml_function_coverage=1 00:42:52.133 --rc genhtml_legend=1 00:42:52.133 --rc geninfo_all_blocks=1 00:42:52.133 --rc geninfo_unexecuted_blocks=1 00:42:52.133 00:42:52.133 ' 00:42:52.133 23:05:55 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:52.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:52.133 --rc genhtml_branch_coverage=1 00:42:52.133 --rc genhtml_function_coverage=1 00:42:52.133 --rc genhtml_legend=1 00:42:52.133 --rc geninfo_all_blocks=1 00:42:52.133 --rc geninfo_unexecuted_blocks=1 00:42:52.133 00:42:52.133 ' 00:42:52.133 23:05:55 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:52.133 23:05:55 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:52.133 23:05:55 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:42:52.133 23:05:55 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:52.133 23:05:55 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:52.133 23:05:55 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:52.133 23:05:55 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:52.133 23:05:55 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:52.133 23:05:55 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:52.133 23:05:55 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:52.133 23:05:55 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:52.133 23:05:55 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:52.133 23:05:55 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:42:52.133 23:05:55 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:52.133 23:05:55 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:52.133 23:05:55 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:52.133 23:05:55 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:52.133 23:05:55 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:52.133 23:05:55 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:52.133 23:05:55 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:52.133 23:05:55 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:42:52.134 23:05:55 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:52.134 23:05:55 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:52.134 23:05:55 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:52.134 23:05:55 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:52.134 23:05:55 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:52.134 23:05:55 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:52.134 23:05:55 keyring_linux -- paths/export.sh@5 -- # export PATH 00:42:52.134 23:05:55 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:52.134 23:05:55 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:42:52.134 23:05:55 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:52.134 23:05:55 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:52.134 23:05:55 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:52.134 23:05:55 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:52.134 23:05:55 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:52.134 23:05:55 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:42:52.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:52.134 23:05:55 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:52.134 23:05:55 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:52.134 23:05:55 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:52.134 23:05:55 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:52.134 23:05:55 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:52.134 23:05:55 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:52.134 23:05:55 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:42:52.134 23:05:55 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:42:52.134 23:05:55 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:42:52.134 23:05:55 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:42:52.134 23:05:55 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:52.134 23:05:55 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:42:52.134 23:05:55 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:52.134 23:05:55 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:52.134 23:05:55 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:42:52.134 23:05:55 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:52.134 23:05:55 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:52.134 23:05:55 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:42:52.134 23:05:55 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:42:52.134 23:05:55 keyring_linux -- nvmf/common.sh@730 -- # 
key=00112233445566778899aabbccddeeff 00:42:52.134 23:05:55 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:42:52.134 23:05:55 keyring_linux -- nvmf/common.sh@731 -- # python - 00:42:52.134 23:05:55 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:42:52.134 23:05:55 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:42:52.134 /tmp/:spdk-test:key0 00:42:52.134 23:05:55 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:42:52.134 23:05:55 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:52.134 23:05:55 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:42:52.134 23:05:55 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:52.134 23:05:55 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:52.134 23:05:55 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:42:52.134 23:05:55 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:52.134 23:05:55 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:52.134 23:05:55 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:42:52.134 23:05:55 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:42:52.134 23:05:55 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:42:52.134 23:05:55 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:42:52.134 23:05:55 keyring_linux -- nvmf/common.sh@731 -- # python - 00:42:52.394 23:05:55 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:42:52.394 23:05:55 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:42:52.394 /tmp/:spdk-test:key1 00:42:52.394 23:05:55 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=480396 00:42:52.394 23:05:55 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:52.394 23:05:55 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 480396 00:42:52.394 23:05:55 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 480396 ']' 00:42:52.394 23:05:55 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:52.394 23:05:55 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:52.394 23:05:55 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:52.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:52.394 23:05:55 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:52.394 23:05:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:52.394 [2024-10-11 23:05:55.484926] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:42:52.394 [2024-10-11 23:05:55.485023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480396 ] 00:42:52.394 [2024-10-11 23:05:55.546304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:52.394 [2024-10-11 23:05:55.596330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:52.653 23:05:55 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:52.653 23:05:55 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:42:52.653 23:05:55 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:42:52.653 23:05:55 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:52.653 23:05:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:52.653 [2024-10-11 23:05:55.858676] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:52.653 null0 00:42:52.653 [2024-10-11 23:05:55.890724] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:52.653 [2024-10-11 23:05:55.891214] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:52.653 23:05:55 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:52.653 23:05:55 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:42:52.653 923689426 00:42:52.653 23:05:55 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:42:52.653 564967889 00:42:52.653 23:05:55 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=480402 00:42:52.653 23:05:55 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w 
randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:42:52.653 23:05:55 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 480402 /var/tmp/bperf.sock 00:42:52.653 23:05:55 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 480402 ']' 00:42:52.653 23:05:55 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:52.653 23:05:55 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:52.653 23:05:55 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:52.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:52.653 23:05:55 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:52.653 23:05:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:52.913 [2024-10-11 23:05:55.956719] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 23.11.0 initialization... 
00:42:52.913 [2024-10-11 23:05:55.956785] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480402 ] 00:42:52.913 [2024-10-11 23:05:56.013265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:52.913 [2024-10-11 23:05:56.057852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:53.172 23:05:56 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:53.172 23:05:56 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:42:53.172 23:05:56 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:42:53.172 23:05:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:42:53.429 23:05:56 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:42:53.429 23:05:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:53.687 23:05:56 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:53.687 23:05:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:53.945 [2024-10-11 23:05:57.062810] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:53.945 nvme0n1 00:42:53.945 23:05:57 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:42:53.945 23:05:57 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:42:53.945 23:05:57 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:53.945 23:05:57 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:53.945 23:05:57 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:53.945 23:05:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:54.203 23:05:57 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:42:54.203 23:05:57 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:54.203 23:05:57 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:42:54.203 23:05:57 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:42:54.203 23:05:57 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:54.203 23:05:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:54.203 23:05:57 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:42:54.461 23:05:57 keyring_linux -- keyring/linux.sh@25 -- # sn=923689426 00:42:54.461 23:05:57 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:42:54.461 23:05:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:54.461 23:05:57 keyring_linux -- keyring/linux.sh@26 -- # [[ 923689426 == \9\2\3\6\8\9\4\2\6 ]] 00:42:54.461 23:05:57 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 923689426 00:42:54.461 23:05:57 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:42:54.461 23:05:57 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:54.721 Running I/O for 1 seconds... 00:42:55.660 11316.00 IOPS, 44.20 MiB/s 00:42:55.660 Latency(us) 00:42:55.660 [2024-10-11T21:05:58.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:55.660 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:42:55.660 nvme0n1 : 1.01 11314.66 44.20 0.00 0.00 11241.02 8107.05 19320.98 00:42:55.660 [2024-10-11T21:05:58.928Z] =================================================================================================================== 00:42:55.660 [2024-10-11T21:05:58.928Z] Total : 11314.66 44.20 0.00 0.00 11241.02 8107.05 19320.98 00:42:55.660 { 00:42:55.660 "results": [ 00:42:55.660 { 00:42:55.660 "job": "nvme0n1", 00:42:55.660 "core_mask": "0x2", 00:42:55.660 "workload": "randread", 00:42:55.660 "status": "finished", 00:42:55.660 "queue_depth": 128, 00:42:55.660 "io_size": 4096, 00:42:55.660 "runtime": 1.01152, 00:42:55.660 "iops": 11314.655172413793, 00:42:55.660 "mibps": 44.19787176724138, 00:42:55.660 "io_failed": 0, 00:42:55.660 "io_timeout": 0, 00:42:55.660 "avg_latency_us": 11241.01619300034, 00:42:55.660 "min_latency_us": 8107.045925925926, 00:42:55.660 "max_latency_us": 19320.983703703703 00:42:55.660 } 00:42:55.660 ], 00:42:55.660 "core_count": 1 00:42:55.660 } 00:42:55.660 23:05:58 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:55.660 23:05:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:55.918 23:05:59 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:42:55.918 23:05:59 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:42:55.918 23:05:59 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:55.918 23:05:59 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:55.918 23:05:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:55.918 23:05:59 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:56.176 23:05:59 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:42:56.176 23:05:59 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:56.176 23:05:59 keyring_linux -- keyring/linux.sh@23 -- # return 00:42:56.176 23:05:59 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:56.176 23:05:59 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:42:56.176 23:05:59 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:56.176 23:05:59 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:56.176 23:05:59 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:56.176 23:05:59 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:56.176 23:05:59 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:56.176 23:05:59 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:56.176 23:05:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:56.435 [2024-10-11 23:05:59.655286] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:56.435 [2024-10-11 23:05:59.655773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7b9820 (107): Transport endpoint is not connected 00:42:56.435 [2024-10-11 23:05:59.656766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7b9820 (9): Bad file descriptor 00:42:56.435 [2024-10-11 23:05:59.657764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:56.435 [2024-10-11 23:05:59.657784] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:56.435 [2024-10-11 23:05:59.657797] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:56.435 [2024-10-11 23:05:59.657811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:42:56.435 request: 00:42:56.435 { 00:42:56.435 "name": "nvme0", 00:42:56.435 "trtype": "tcp", 00:42:56.435 "traddr": "127.0.0.1", 00:42:56.435 "adrfam": "ipv4", 00:42:56.435 "trsvcid": "4420", 00:42:56.435 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:56.435 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:56.435 "prchk_reftag": false, 00:42:56.435 "prchk_guard": false, 00:42:56.435 "hdgst": false, 00:42:56.435 "ddgst": false, 00:42:56.435 "psk": ":spdk-test:key1", 00:42:56.435 "allow_unrecognized_csi": false, 00:42:56.435 "method": "bdev_nvme_attach_controller", 00:42:56.435 "req_id": 1 00:42:56.435 } 00:42:56.435 Got JSON-RPC error response 00:42:56.435 response: 00:42:56.435 { 00:42:56.435 "code": -5, 00:42:56.435 "message": "Input/output error" 00:42:56.435 } 00:42:56.435 23:05:59 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:42:56.435 23:05:59 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:56.435 23:05:59 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:56.435 23:05:59 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:56.435 23:05:59 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:42:56.435 23:05:59 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:56.435 23:05:59 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:42:56.435 23:05:59 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:42:56.435 23:05:59 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:42:56.435 23:05:59 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:56.435 23:05:59 keyring_linux -- keyring/linux.sh@33 -- # sn=923689426 00:42:56.435 23:05:59 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 923689426 00:42:56.435 1 links removed 00:42:56.435 23:05:59 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:56.435 23:05:59 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:42:56.435 
23:05:59 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:42:56.435 23:05:59 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:42:56.435 23:05:59 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:42:56.435 23:05:59 keyring_linux -- keyring/linux.sh@33 -- # sn=564967889 00:42:56.435 23:05:59 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 564967889 00:42:56.435 1 links removed 00:42:56.435 23:05:59 keyring_linux -- keyring/linux.sh@41 -- # killprocess 480402 00:42:56.435 23:05:59 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 480402 ']' 00:42:56.435 23:05:59 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 480402 00:42:56.435 23:05:59 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:42:56.435 23:05:59 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:56.435 23:05:59 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 480402 00:42:56.694 23:05:59 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:56.694 23:05:59 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:56.694 23:05:59 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 480402' 00:42:56.694 killing process with pid 480402 00:42:56.694 23:05:59 keyring_linux -- common/autotest_common.sh@969 -- # kill 480402 00:42:56.694 Received shutdown signal, test time was about 1.000000 seconds 00:42:56.694 00:42:56.694 Latency(us) 00:42:56.694 [2024-10-11T21:05:59.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:56.694 [2024-10-11T21:05:59.962Z] =================================================================================================================== 00:42:56.694 [2024-10-11T21:05:59.962Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:56.694 23:05:59 keyring_linux -- common/autotest_common.sh@974 -- # wait 480402 
00:42:56.694 23:05:59 keyring_linux -- keyring/linux.sh@42 -- # killprocess 480396 00:42:56.694 23:05:59 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 480396 ']' 00:42:56.694 23:05:59 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 480396 00:42:56.694 23:05:59 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:42:56.694 23:05:59 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:56.694 23:05:59 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 480396 00:42:56.694 23:05:59 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:56.694 23:05:59 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:56.694 23:05:59 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 480396' 00:42:56.694 killing process with pid 480396 00:42:56.694 23:05:59 keyring_linux -- common/autotest_common.sh@969 -- # kill 480396 00:42:56.694 23:05:59 keyring_linux -- common/autotest_common.sh@974 -- # wait 480396 00:42:57.261 00:42:57.261 real 0m5.122s 00:42:57.261 user 0m10.201s 00:42:57.261 sys 0m1.625s 00:42:57.261 23:06:00 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:57.261 23:06:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:57.261 ************************************ 00:42:57.261 END TEST keyring_linux 00:42:57.261 ************************************ 00:42:57.261 23:06:00 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:42:57.261 23:06:00 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:42:57.261 23:06:00 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:42:57.261 23:06:00 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:42:57.261 23:06:00 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:42:57.261 23:06:00 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:42:57.261 23:06:00 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:42:57.261 23:06:00 -- spdk/autotest.sh@342 -- # '[' 0 
-eq 1 ']' 00:42:57.261 23:06:00 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:42:57.261 23:06:00 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:42:57.261 23:06:00 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:42:57.261 23:06:00 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:42:57.261 23:06:00 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:42:57.261 23:06:00 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:42:57.261 23:06:00 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:42:57.261 23:06:00 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:42:57.261 23:06:00 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:42:57.261 23:06:00 -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:57.261 23:06:00 -- common/autotest_common.sh@10 -- # set +x 00:42:57.261 23:06:00 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:42:57.261 23:06:00 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:42:57.261 23:06:00 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:42:57.261 23:06:00 -- common/autotest_common.sh@10 -- # set +x 00:42:59.163 INFO: APP EXITING 00:42:59.163 INFO: killing all VMs 00:42:59.163 INFO: killing vhost app 00:42:59.163 INFO: EXIT DONE 00:43:00.098 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:43:00.356 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:43:00.356 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:43:00.356 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:43:00.356 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:43:00.356 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:43:00.356 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:43:00.356 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:43:00.356 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:43:00.356 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:43:00.356 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:43:00.356 
0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:43:00.356 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:43:00.356 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:43:00.356 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:43:00.356 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:43:00.356 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:43:01.733 Cleaning 00:43:01.733 Removing: /var/run/dpdk/spdk0/config 00:43:01.733 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:43:01.733 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:43:01.733 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:43:01.733 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:43:01.733 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:43:01.733 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:43:01.733 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:43:01.733 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:43:01.733 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:43:01.733 Removing: /var/run/dpdk/spdk0/hugepage_info 00:43:01.733 Removing: /var/run/dpdk/spdk1/config 00:43:01.733 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:43:01.733 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:43:01.733 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:43:01.733 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:43:01.733 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:43:01.733 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:43:01.733 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:43:01.733 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:43:01.733 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:43:01.733 Removing: /var/run/dpdk/spdk1/hugepage_info 00:43:01.733 Removing: /var/run/dpdk/spdk2/config 00:43:01.733 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:43:01.733 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:43:01.733 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:43:01.733 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:43:01.733 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:43:01.733 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:43:01.733 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:43:01.733 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:43:01.733 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:43:01.733 Removing: /var/run/dpdk/spdk2/hugepage_info 00:43:01.733 Removing: /var/run/dpdk/spdk3/config 00:43:01.733 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:43:01.733 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:43:01.733 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:43:01.733 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:43:01.733 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:43:01.733 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:43:01.733 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:43:01.733 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:43:01.733 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:43:01.733 Removing: /var/run/dpdk/spdk3/hugepage_info 00:43:01.733 Removing: /var/run/dpdk/spdk4/config 00:43:01.733 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:43:01.733 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:43:01.733 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:43:01.733 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:43:01.733 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:43:01.733 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:43:01.733 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:43:01.733 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:43:01.733 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:43:01.733 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:43:01.733 Removing: /dev/shm/bdev_svc_trace.1 00:43:01.733 Removing: /dev/shm/nvmf_trace.0 00:43:01.734 Removing: /dev/shm/spdk_tgt_trace.pid99864 00:43:01.734 Removing: /var/run/dpdk/spdk0 00:43:01.734 Removing: /var/run/dpdk/spdk1 00:43:01.734 Removing: /var/run/dpdk/spdk2 00:43:01.734 Removing: /var/run/dpdk/spdk3 00:43:01.734 Removing: /var/run/dpdk/spdk4 00:43:01.734 Removing: /var/run/dpdk/spdk_pid100225 00:43:01.734 Removing: /var/run/dpdk/spdk_pid100913 00:43:01.734 Removing: /var/run/dpdk/spdk_pid101052 00:43:01.734 Removing: /var/run/dpdk/spdk_pid101770 00:43:01.734 Removing: /var/run/dpdk/spdk_pid101776 00:43:01.734 Removing: /var/run/dpdk/spdk_pid102036 00:43:01.734 Removing: /var/run/dpdk/spdk_pid103359 00:43:01.734 Removing: /var/run/dpdk/spdk_pid104290 00:43:01.734 Removing: /var/run/dpdk/spdk_pid104495 00:43:01.994 Removing: /var/run/dpdk/spdk_pid104797 00:43:01.994 Removing: /var/run/dpdk/spdk_pid105011 00:43:01.994 Removing: /var/run/dpdk/spdk_pid105210 00:43:01.994 Removing: /var/run/dpdk/spdk_pid105366 00:43:01.994 Removing: /var/run/dpdk/spdk_pid105527 00:43:01.994 Removing: /var/run/dpdk/spdk_pid105714 00:43:01.994 Removing: /var/run/dpdk/spdk_pid106024 00:43:01.994 Removing: /var/run/dpdk/spdk_pid109153 00:43:01.994 Removing: /var/run/dpdk/spdk_pid109318 00:43:01.994 Removing: /var/run/dpdk/spdk_pid109478 00:43:01.994 Removing: /var/run/dpdk/spdk_pid109483 00:43:01.994 Removing: /var/run/dpdk/spdk_pid109782 00:43:01.994 Removing: /var/run/dpdk/spdk_pid109908 00:43:01.994 Removing: /var/run/dpdk/spdk_pid110212 00:43:01.994 Removing: /var/run/dpdk/spdk_pid110222 00:43:01.994 Removing: /var/run/dpdk/spdk_pid110389 00:43:01.994 Removing: /var/run/dpdk/spdk_pid110521 00:43:01.994 Removing: /var/run/dpdk/spdk_pid110684 00:43:01.994 Removing: /var/run/dpdk/spdk_pid110690 00:43:01.994 Removing: /var/run/dpdk/spdk_pid111069 00:43:01.994 Removing: /var/run/dpdk/spdk_pid111221 00:43:01.994 Removing: /var/run/dpdk/spdk_pid111537 00:43:01.994 Removing: 
/var/run/dpdk/spdk_pid113656 00:43:01.994 Removing: /var/run/dpdk/spdk_pid116293 00:43:01.994 Removing: /var/run/dpdk/spdk_pid123290 00:43:01.994 Removing: /var/run/dpdk/spdk_pid123698 00:43:01.994 Removing: /var/run/dpdk/spdk_pid126218 00:43:01.994 Removing: /var/run/dpdk/spdk_pid126456 00:43:01.994 Removing: /var/run/dpdk/spdk_pid129022 00:43:01.994 Removing: /var/run/dpdk/spdk_pid132793 00:43:01.994 Removing: /var/run/dpdk/spdk_pid134941 00:43:01.994 Removing: /var/run/dpdk/spdk_pid141440 00:43:01.994 Removing: /var/run/dpdk/spdk_pid147289 00:43:01.994 Removing: /var/run/dpdk/spdk_pid148575 00:43:01.994 Removing: /var/run/dpdk/spdk_pid149243 00:43:01.994 Removing: /var/run/dpdk/spdk_pid159624 00:43:01.994 Removing: /var/run/dpdk/spdk_pid161793 00:43:01.994 Removing: /var/run/dpdk/spdk_pid217601 00:43:01.994 Removing: /var/run/dpdk/spdk_pid220792 00:43:01.994 Removing: /var/run/dpdk/spdk_pid224592 00:43:01.994 Removing: /var/run/dpdk/spdk_pid228445 00:43:01.994 Removing: /var/run/dpdk/spdk_pid228449 00:43:01.994 Removing: /var/run/dpdk/spdk_pid229103 00:43:01.994 Removing: /var/run/dpdk/spdk_pid229636 00:43:01.994 Removing: /var/run/dpdk/spdk_pid230296 00:43:01.994 Removing: /var/run/dpdk/spdk_pid230695 00:43:01.994 Removing: /var/run/dpdk/spdk_pid230698 00:43:01.994 Removing: /var/run/dpdk/spdk_pid230956 00:43:01.994 Removing: /var/run/dpdk/spdk_pid231094 00:43:01.994 Removing: /var/run/dpdk/spdk_pid231101 00:43:01.994 Removing: /var/run/dpdk/spdk_pid231749 00:43:01.994 Removing: /var/run/dpdk/spdk_pid232296 00:43:01.994 Removing: /var/run/dpdk/spdk_pid232948 00:43:01.994 Removing: /var/run/dpdk/spdk_pid233356 00:43:01.994 Removing: /var/run/dpdk/spdk_pid233473 00:43:01.994 Removing: /var/run/dpdk/spdk_pid233618 00:43:01.994 Removing: /var/run/dpdk/spdk_pid234650 00:43:01.994 Removing: /var/run/dpdk/spdk_pid235969 00:43:01.994 Removing: /var/run/dpdk/spdk_pid241297 00:43:01.994 Removing: /var/run/dpdk/spdk_pid269484 00:43:01.994 Removing: 
/var/run/dpdk/spdk_pid272407 00:43:01.994 Removing: /var/run/dpdk/spdk_pid273582 00:43:01.994 Removing: /var/run/dpdk/spdk_pid274902 00:43:01.994 Removing: /var/run/dpdk/spdk_pid275039 00:43:01.994 Removing: /var/run/dpdk/spdk_pid275180 00:43:01.994 Removing: /var/run/dpdk/spdk_pid275295 00:43:01.994 Removing: /var/run/dpdk/spdk_pid275765 00:43:01.994 Removing: /var/run/dpdk/spdk_pid277061 00:43:01.994 Removing: /var/run/dpdk/spdk_pid277817 00:43:01.994 Removing: /var/run/dpdk/spdk_pid278245 00:43:01.994 Removing: /var/run/dpdk/spdk_pid279731 00:43:01.994 Removing: /var/run/dpdk/spdk_pid280157 00:43:01.994 Removing: /var/run/dpdk/spdk_pid280600 00:43:01.994 Removing: /var/run/dpdk/spdk_pid283004 00:43:01.994 Removing: /var/run/dpdk/spdk_pid286513 00:43:01.994 Removing: /var/run/dpdk/spdk_pid286514 00:43:01.994 Removing: /var/run/dpdk/spdk_pid286515 00:43:01.994 Removing: /var/run/dpdk/spdk_pid289092 00:43:01.994 Removing: /var/run/dpdk/spdk_pid291323 00:43:01.994 Removing: /var/run/dpdk/spdk_pid294846 00:43:01.994 Removing: /var/run/dpdk/spdk_pid317769 00:43:01.994 Removing: /var/run/dpdk/spdk_pid320551 00:43:01.995 Removing: /var/run/dpdk/spdk_pid324446 00:43:01.995 Removing: /var/run/dpdk/spdk_pid325337 00:43:01.995 Removing: /var/run/dpdk/spdk_pid326357 00:43:01.995 Removing: /var/run/dpdk/spdk_pid327438 00:43:01.995 Removing: /var/run/dpdk/spdk_pid330263 00:43:01.995 Removing: /var/run/dpdk/spdk_pid332625 00:43:01.995 Removing: /var/run/dpdk/spdk_pid336857 00:43:01.995 Removing: /var/run/dpdk/spdk_pid336859 00:43:01.995 Removing: /var/run/dpdk/spdk_pid339757 00:43:01.995 Removing: /var/run/dpdk/spdk_pid339890 00:43:01.995 Removing: /var/run/dpdk/spdk_pid340032 00:43:01.995 Removing: /var/run/dpdk/spdk_pid340337 00:43:01.995 Removing: /var/run/dpdk/spdk_pid340425 00:43:01.995 Removing: /var/run/dpdk/spdk_pid341498 00:43:01.995 Removing: /var/run/dpdk/spdk_pid342673 00:43:01.995 Removing: /var/run/dpdk/spdk_pid343850 00:43:01.995 Removing: 
/var/run/dpdk/spdk_pid345024 00:43:01.995 Removing: /var/run/dpdk/spdk_pid346200 00:43:01.995 Removing: /var/run/dpdk/spdk_pid347497 00:43:01.995 Removing: /var/run/dpdk/spdk_pid351814 00:43:01.995 Removing: /var/run/dpdk/spdk_pid352270 00:43:02.254 Removing: /var/run/dpdk/spdk_pid353564 00:43:02.254 Removing: /var/run/dpdk/spdk_pid354369 00:43:02.254 Removing: /var/run/dpdk/spdk_pid358131 00:43:02.254 Removing: /var/run/dpdk/spdk_pid360008 00:43:02.254 Removing: /var/run/dpdk/spdk_pid363545 00:43:02.254 Removing: /var/run/dpdk/spdk_pid366884 00:43:02.254 Removing: /var/run/dpdk/spdk_pid373354 00:43:02.254 Removing: /var/run/dpdk/spdk_pid377703 00:43:02.254 Removing: /var/run/dpdk/spdk_pid377705 00:43:02.254 Removing: /var/run/dpdk/spdk_pid390832 00:43:02.254 Removing: /var/run/dpdk/spdk_pid391360 00:43:02.254 Removing: /var/run/dpdk/spdk_pid391760 00:43:02.254 Removing: /var/run/dpdk/spdk_pid392172 00:43:02.254 Removing: /var/run/dpdk/spdk_pid392747 00:43:02.254 Removing: /var/run/dpdk/spdk_pid393157 00:43:02.254 Removing: /var/run/dpdk/spdk_pid393689 00:43:02.254 Removing: /var/run/dpdk/spdk_pid394089 00:43:02.254 Removing: /var/run/dpdk/spdk_pid396599 00:43:02.254 Removing: /var/run/dpdk/spdk_pid396742 00:43:02.254 Removing: /var/run/dpdk/spdk_pid400534 00:43:02.254 Removing: /var/run/dpdk/spdk_pid400708 00:43:02.254 Removing: /var/run/dpdk/spdk_pid404062 00:43:02.254 Removing: /var/run/dpdk/spdk_pid406651 00:43:02.254 Removing: /var/run/dpdk/spdk_pid413570 00:43:02.254 Removing: /var/run/dpdk/spdk_pid414487 00:43:02.254 Removing: /var/run/dpdk/spdk_pid416991 00:43:02.254 Removing: /var/run/dpdk/spdk_pid417160 00:43:02.254 Removing: /var/run/dpdk/spdk_pid419761 00:43:02.254 Removing: /var/run/dpdk/spdk_pid423443 00:43:02.254 Removing: /var/run/dpdk/spdk_pid425495 00:43:02.254 Removing: /var/run/dpdk/spdk_pid431852 00:43:02.254 Removing: /var/run/dpdk/spdk_pid437051 00:43:02.254 Removing: /var/run/dpdk/spdk_pid438227 00:43:02.254 Removing: 
/var/run/dpdk/spdk_pid438888 00:43:02.254 Removing: /var/run/dpdk/spdk_pid449066 00:43:02.254 Removing: /var/run/dpdk/spdk_pid451429 00:43:02.254 Removing: /var/run/dpdk/spdk_pid453928 00:43:02.254 Removing: /var/run/dpdk/spdk_pid458969 00:43:02.254 Removing: /var/run/dpdk/spdk_pid458974 00:43:02.254 Removing: /var/run/dpdk/spdk_pid461870 00:43:02.254 Removing: /var/run/dpdk/spdk_pid463264 00:43:02.254 Removing: /var/run/dpdk/spdk_pid464546 00:43:02.254 Removing: /var/run/dpdk/spdk_pid465398 00:43:02.254 Removing: /var/run/dpdk/spdk_pid466807 00:43:02.254 Removing: /var/run/dpdk/spdk_pid467634 00:43:02.254 Removing: /var/run/dpdk/spdk_pid472974 00:43:02.254 Removing: /var/run/dpdk/spdk_pid473368 00:43:02.254 Removing: /var/run/dpdk/spdk_pid473761 00:43:02.254 Removing: /var/run/dpdk/spdk_pid475311 00:43:02.254 Removing: /var/run/dpdk/spdk_pid475590 00:43:02.254 Removing: /var/run/dpdk/spdk_pid475990 00:43:02.254 Removing: /var/run/dpdk/spdk_pid478437 00:43:02.254 Removing: /var/run/dpdk/spdk_pid478444 00:43:02.254 Removing: /var/run/dpdk/spdk_pid479909 00:43:02.254 Removing: /var/run/dpdk/spdk_pid480396 00:43:02.254 Removing: /var/run/dpdk/spdk_pid480402 00:43:02.254 Removing: /var/run/dpdk/spdk_pid98220 00:43:02.254 Removing: /var/run/dpdk/spdk_pid98957 00:43:02.254 Removing: /var/run/dpdk/spdk_pid99864 00:43:02.254 Clean 00:43:02.254 23:06:05 -- common/autotest_common.sh@1451 -- # return 0 00:43:02.254 23:06:05 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:43:02.254 23:06:05 -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:02.254 23:06:05 -- common/autotest_common.sh@10 -- # set +x 00:43:02.254 23:06:05 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:43:02.254 23:06:05 -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:02.254 23:06:05 -- common/autotest_common.sh@10 -- # set +x 00:43:02.512 23:06:05 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:02.512 23:06:05 
-- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:43:02.512 23:06:05 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:43:02.512 23:06:05 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:43:02.512 23:06:05 -- spdk/autotest.sh@394 -- # hostname 00:43:02.512 23:06:05 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:43:02.512 geninfo: WARNING: invalid characters removed from testname! 00:43:34.602 23:06:35 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:37.145 23:06:39 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:39.688 23:06:42 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:42.986 23:06:45 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:46.287 23:06:48 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:48.832 23:06:51 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:52.133 23:06:54 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:43:52.133 23:06:55 -- common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:43:52.133 23:06:55 -- common/autotest_common.sh@1691 -- $ lcov --version 00:43:52.133 23:06:55 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:43:52.133 23:06:55 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:43:52.133 23:06:55 -- 
scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:43:52.133 23:06:55 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:43:52.133 23:06:55 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:43:52.133 23:06:55 -- scripts/common.sh@336 -- $ IFS=.-: 00:43:52.133 23:06:55 -- scripts/common.sh@336 -- $ read -ra ver1 00:43:52.133 23:06:55 -- scripts/common.sh@337 -- $ IFS=.-: 00:43:52.133 23:06:55 -- scripts/common.sh@337 -- $ read -ra ver2 00:43:52.133 23:06:55 -- scripts/common.sh@338 -- $ local 'op=<' 00:43:52.133 23:06:55 -- scripts/common.sh@340 -- $ ver1_l=2 00:43:52.133 23:06:55 -- scripts/common.sh@341 -- $ ver2_l=1 00:43:52.133 23:06:55 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:43:52.133 23:06:55 -- scripts/common.sh@344 -- $ case "$op" in 00:43:52.133 23:06:55 -- scripts/common.sh@345 -- $ : 1 00:43:52.133 23:06:55 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:43:52.133 23:06:55 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:52.134 23:06:55 -- scripts/common.sh@365 -- $ decimal 1 00:43:52.134 23:06:55 -- scripts/common.sh@353 -- $ local d=1 00:43:52.134 23:06:55 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:43:52.134 23:06:55 -- scripts/common.sh@355 -- $ echo 1 00:43:52.134 23:06:55 -- scripts/common.sh@365 -- $ ver1[v]=1 00:43:52.134 23:06:55 -- scripts/common.sh@366 -- $ decimal 2 00:43:52.134 23:06:55 -- scripts/common.sh@353 -- $ local d=2 00:43:52.134 23:06:55 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:43:52.134 23:06:55 -- scripts/common.sh@355 -- $ echo 2 00:43:52.134 23:06:55 -- scripts/common.sh@366 -- $ ver2[v]=2 00:43:52.134 23:06:55 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:43:52.134 23:06:55 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:43:52.134 23:06:55 -- scripts/common.sh@368 -- $ return 0 00:43:52.134 23:06:55 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:52.134 23:06:55 -- 
common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS= 00:43:52.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:52.134 --rc genhtml_branch_coverage=1 00:43:52.134 --rc genhtml_function_coverage=1 00:43:52.134 --rc genhtml_legend=1 00:43:52.134 --rc geninfo_all_blocks=1 00:43:52.134 --rc geninfo_unexecuted_blocks=1 00:43:52.134 00:43:52.134 ' 00:43:52.134 23:06:55 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS=' 00:43:52.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:52.134 --rc genhtml_branch_coverage=1 00:43:52.134 --rc genhtml_function_coverage=1 00:43:52.134 --rc genhtml_legend=1 00:43:52.134 --rc geninfo_all_blocks=1 00:43:52.134 --rc geninfo_unexecuted_blocks=1 00:43:52.134 00:43:52.134 ' 00:43:52.134 23:06:55 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov 00:43:52.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:52.134 --rc genhtml_branch_coverage=1 00:43:52.134 --rc genhtml_function_coverage=1 00:43:52.134 --rc genhtml_legend=1 00:43:52.134 --rc geninfo_all_blocks=1 00:43:52.134 --rc geninfo_unexecuted_blocks=1 00:43:52.134 00:43:52.134 ' 00:43:52.134 23:06:55 -- common/autotest_common.sh@1705 -- $ LCOV='lcov 00:43:52.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:52.134 --rc genhtml_branch_coverage=1 00:43:52.134 --rc genhtml_function_coverage=1 00:43:52.134 --rc genhtml_legend=1 00:43:52.134 --rc geninfo_all_blocks=1 00:43:52.134 --rc geninfo_unexecuted_blocks=1 00:43:52.134 00:43:52.134 ' 00:43:52.134 23:06:55 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:52.134 23:06:55 -- scripts/common.sh@15 -- $ shopt -s extglob 00:43:52.134 23:06:55 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:43:52.134 23:06:55 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:52.134 23:06:55 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 
00:43:52.134 23:06:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:52.134 23:06:55 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:52.134 23:06:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:52.134 23:06:55 -- paths/export.sh@5 -- $ export PATH 00:43:52.134 23:06:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:52.134 23:06:55 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:43:52.134 23:06:55 -- common/autobuild_common.sh@486 -- $ date +%s 00:43:52.134 23:06:55 -- common/autobuild_common.sh@486 
-- $ mktemp -dt spdk_1728680815.XXXXXX 00:43:52.134 23:06:55 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728680815.gRRsWL 00:43:52.134 23:06:55 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:43:52.134 23:06:55 -- common/autobuild_common.sh@492 -- $ '[' -n v23.11 ']' 00:43:52.134 23:06:55 -- common/autobuild_common.sh@493 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:43:52.134 23:06:55 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:43:52.134 23:06:55 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:43:52.134 23:06:55 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:43:52.134 23:06:55 -- common/autobuild_common.sh@502 -- $ get_config_params 00:43:52.134 23:06:55 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:43:52.134 23:06:55 -- common/autotest_common.sh@10 -- $ set +x 00:43:52.134 23:06:55 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:43:52.134 23:06:55 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:43:52.134 23:06:55 -- pm/common@17 -- $ local monitor 00:43:52.134 23:06:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:52.134 23:06:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:52.134 23:06:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:52.134 
23:06:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:52.134 23:06:55 -- pm/common@21 -- $ date +%s 00:43:52.134 23:06:55 -- pm/common@21 -- $ date +%s 00:43:52.134 23:06:55 -- pm/common@25 -- $ sleep 1 00:43:52.134 23:06:55 -- pm/common@21 -- $ date +%s 00:43:52.134 23:06:55 -- pm/common@21 -- $ date +%s 00:43:52.134 23:06:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728680815 00:43:52.134 23:06:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728680815 00:43:52.134 23:06:55 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728680815 00:43:52.134 23:06:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728680815 00:43:52.134 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728680815_collect-vmstat.pm.log 00:43:52.134 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728680815_collect-cpu-load.pm.log 00:43:52.134 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728680815_collect-cpu-temp.pm.log 00:43:52.134 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728680815_collect-bmc-pm.bmc.pm.log 00:43:53.075 23:06:56 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:43:53.075 23:06:56 
-- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:43:53.075 23:06:56 -- spdk/autopackage.sh@14 -- $ timing_finish 00:43:53.075 23:06:56 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:43:53.076 23:06:56 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:43:53.076 23:06:56 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:53.076 23:06:56 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:43:53.076 23:06:56 -- pm/common@29 -- $ signal_monitor_resources TERM 00:43:53.076 23:06:56 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:43:53.076 23:06:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:53.076 23:06:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:43:53.076 23:06:56 -- pm/common@44 -- $ pid=493245 00:43:53.076 23:06:56 -- pm/common@50 -- $ kill -TERM 493245 00:43:53.076 23:06:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:53.076 23:06:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:43:53.076 23:06:56 -- pm/common@44 -- $ pid=493247 00:43:53.076 23:06:56 -- pm/common@50 -- $ kill -TERM 493247 00:43:53.076 23:06:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:53.076 23:06:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:43:53.076 23:06:56 -- pm/common@44 -- $ pid=493249 00:43:53.076 23:06:56 -- pm/common@50 -- $ kill -TERM 493249 00:43:53.076 23:06:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:53.076 23:06:56 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:43:53.076 23:06:56 -- pm/common@44 -- $ pid=493278
00:43:53.076 23:06:56 -- pm/common@50 -- $ sudo -E kill -TERM 493278
00:43:53.076 + [[ -n 6093 ]]
00:43:53.086 + sudo kill 6093
00:43:53.086 [Pipeline] }
00:43:53.102 [Pipeline] // stage
00:43:53.107 [Pipeline] }
00:43:53.120 [Pipeline] // timeout
00:43:53.125 [Pipeline] }
00:43:53.136 [Pipeline] // catchError
00:43:53.141 [Pipeline] }
00:43:53.154 [Pipeline] // wrap
00:43:53.160 [Pipeline] }
00:43:53.170 [Pipeline] // catchError
00:43:53.178 [Pipeline] stage
00:43:53.180 [Pipeline] { (Epilogue)
00:43:53.190 [Pipeline] catchError
00:43:53.191 [Pipeline] {
00:43:53.201 [Pipeline] echo
00:43:53.202 Cleanup processes
00:43:53.207 [Pipeline] sh
00:43:53.493 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:43:53.493 493442 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:43:53.493 493558 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:43:53.508 [Pipeline] sh
00:43:53.796 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:43:53.796 ++ grep -v 'sudo pgrep'
00:43:53.796 ++ awk '{print $1}'
00:43:53.796 + sudo kill -9 493442
00:43:53.809 [Pipeline] sh
00:43:54.096 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:44:06.313 [Pipeline] sh
00:44:06.605 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:44:06.605 Artifacts sizes are good
00:44:06.620 [Pipeline] archiveArtifacts
00:44:06.628 Archiving artifacts
00:44:07.127 [Pipeline] sh
00:44:07.413 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:44:07.429 [Pipeline] cleanWs
00:44:07.440 [WS-CLEANUP] Deleting project workspace...
00:44:07.440 [WS-CLEANUP] Deferred wipeout is used...
00:44:07.447 [WS-CLEANUP] done
00:44:07.449 [Pipeline] }
00:44:07.468 [Pipeline] // catchError
00:44:07.479 [Pipeline] sh
00:44:07.764 + logger -p user.info -t JENKINS-CI
00:44:07.772 [Pipeline] }
00:44:07.787 [Pipeline] // stage
00:44:07.793 [Pipeline] }
00:44:07.808 [Pipeline] // node
00:44:07.813 [Pipeline] End of Pipeline
00:44:07.861 Finished: SUCCESS